
WO2018193285A1 - Method and apparatus for enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling

Method and apparatus for enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling

Info

Publication number
WO2018193285A1
Authority
WO
WIPO (PCT)
Prior art keywords
multicast
multicast stream
network
mldp
tunnel
Prior art date
Application number
PCT/IB2017/052188
Other languages
French (fr)
Inventor
Kotesh Babu CHUNDU
Gangadhara Reddy CHAVVA
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IB2017/052188 priority Critical patent/WO2018193285A1/en
Publication of WO2018193285A1 publication Critical patent/WO2018193285A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 12/4641 Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1886 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with traffic restrictions for efficiency improvement, e.g. involving subnets or subdomains
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2854 Wide area networks, e.g. public data networks
    • H04L 12/2856 Access arrangements, e.g. Internet access
    • H04L 12/2858 Access network architectures
    • H04L 12/2861 Point-to-multipoint connection from the data network to the subscribers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 12/4633 Interconnection of networks using encapsulation techniques, e.g. tunneling

Definitions

  • Embodiments of the invention relate to the field of packet networks; and more specifically, to enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling.
  • Border Gateway Protocol Multiprotocol Label Switching Virtual Private Network (BGP/MPLS VPN) networks offer a VPN service to customers that enable sites of a VPN network (e.g., enterprise network) to transport network traffic to other sites of the VPN network using an MPLS provider network.
  • MVPN: Multicast VPN
  • Each of the provider's network and the VPN network include network devices (NDs) enabling the forwarding of the traffic from the source to the receivers of a multicast stream.
  • Abbreviations: PE (Provider Edge), CE (Customer Edge), LSM (Label Switched Multicast), LSP (Label Switched Path), mLDP (Multicast LDP), RFC (Request for Comments).
  • mLDP permits the creation of point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP) multicast distribution trees within the MPLS network.
  • Multicast packets are encapsulated in mLDP tunnels in the MPLS network; once they reach the end of the MPLS network, the MPLS labels are decapsulated and the inner multicast packets are forwarded as regular multicast packets in the IP domain.
  • Two signaling mechanisms may be used to map an IP multicast stream to an mLDP tunnel: 1) In-band signaling, and 2) Out-of-band signaling.
  • With in-band signaling, multicast stream information, along with an identifier (e.g., an address) of the root of a P2MP or MP2MP multicast LSP distribution tree, is carried in a field of a Label Forwarding Equivalence Class (FEC) message within the MPLS core network.
  • With out-of-band signaling, multicast stream information is carried through routing protocols such as Border Gateway Protocol (BGP) and PIM.
  • With in-band signaling, each IP multicast stream in a given VPN network has an associated mLDP tunnel (also referred to as an associated LSP tree) in the MPLS network. This one-to-one correspondence between IP multicast streams and mLDP tunnels causes scalability challenges in the provider's network (i.e., the MPLS network).
  • each PE of the provider's network typically supports several VPN customers (i.e., several VPN instances) and therefore would need to create and maintain states for a significant number of mLDP tunnels associated with these VPN customers. This causes an enormous load on the network devices of the MPLS network due to the number of states that need to be maintained for the mLDP tunnels.
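The scaling pressure described above can be made concrete with a back-of-the-envelope comparison. The sketch below (illustrative numbers and function names, not from the patent) contrasts per-stream LSP state, as with classic in-band signaling, against per-VPN state when streams are aggregated onto a default tunnel:

```python
# Illustrative comparison of mLDP LSP state held at a PE.
# All figures are hypothetical; they only show how state scales.

def lsp_states_per_stream(num_vpns: int, streams_per_vpn: int) -> int:
    """Classic in-band signaling: one mLDP tunnel per stream per VPN."""
    return num_vpns * streams_per_vpn

def lsp_states_default_tunnel(num_vpns: int, dedicated_streams: int = 0) -> int:
    """Aggregation: one default tunnel per VPN, plus any dedicated tunnels."""
    return num_vpns + dedicated_streams

# 100 VPN instances with 50 streams each:
print(lsp_states_per_stream(100, 50))      # 5000 LSP states
print(lsp_states_default_tunnel(100, 3))   # 103 LSP states
```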
  • Rosen MVPN (IETF RFC 6037) is a solution which realizes MVPN service using a concept of a multicast domain.
  • In Rosen MVPN, the PEs of a provider's network create multicast trees between each other. These trees can be of two types: a Default Multicast Distribution Tree (Default MDT) or a Data Multicast Distribution Tree (Data MDT).
  • The type of tree created determines the traffic that the tree carries and the network devices that join it. A default MDT acts as a Local Area Network (LAN) interface that connects the corresponding PEs of a VPN, regardless of whether a CE coupled with a given PE wants to join a particular multicast stream inside the VPN network.
  • Several multicast streams may be carried through the single default MDT tree over the MPLS network. The default tree carries: 1) control-plane traffic and 2) low-rate data-plane traffic for particular sources. The default MDT is constructed using a global multicast group address by running Protocol Independent Multicast (PIM) in the MPLS network.
  • Customer signaling is also done using PIM across the PEs.
  • The default MDT uses Generic Routing Encapsulation (GRE) in the data plane: all customer multicast streams are encapsulated in the default MDT using GRE encapsulation and sent through the MPLS core network to the egress PEs.
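The GRE encapsulation step mentioned above can be sketched minimally. The code below builds the basic 4-byte GRE header of RFC 2784 (flags/version set to zero, protocol type 0x0800 for IPv4) in front of a stand-in inner packet; real Rosen-MVPN data-plane code is far more involved, and the payload here is purely illustrative:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType carried in the GRE protocol-type field

def gre_encapsulate(inner_ip_packet: bytes) -> bytes:
    """Prefix an inner IP packet with a basic RFC 2784 GRE header."""
    # Flags/version word = 0: no checksum, no key, no sequence number.
    gre_header = struct.pack("!HH", 0, GRE_PROTO_IPV4)
    return gre_header + inner_ip_packet

payload = b"\x45" + b"\x00" * 19  # stand-in for a minimal IPv4 packet
frame = gre_encapsulate(payload)
assert frame[:4] == b"\x00\x00\x08\x00"
assert frame[4:] == payload
```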
  • Rosen MVPN also enables service providers to create a separate MDT tree in the MPLS network for a given multicast stream using a policy configuration called a Data MDT (S-PMSI, Selective Provider Multicast Service Interface). In this scenario, only the given multicast stream is transported on the Data MDT, at the expense of extra states in the MPLS network. Data MDTs may be used for forwarding high-rate multicast sources.
  • One general aspect includes a method of enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network.
  • the method includes receiving a first internet protocol (IP) multicast event message from a first network device of a first VPN instance, where the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic.
  • the method also includes causing generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, where the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver.
  • the method also includes receiving a second IP multicast event message from a second network device of the first VPN instance, where the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic.
  • the method also includes determining whether the second multicast stream and the first multicast stream include traffic for the first VPN instance; and responsive to determining that the second multicast stream and the first multicast stream include traffic within the first VPN instance, causing packets of the second multicast stream to be forwarded through the default mLDP tunnel.
  • the method also includes receiving, over the default mLDP tunnel, packets of the first and the second multicast stream to be forwarded towards the first and the second receiver respectively.
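The aggregation behavior summarized in this aspect can be sketched as follows. The class and names below are assumptions for illustration only: the first IP multicast event message for a VPN instance triggers a per-VPN default mLDP tunnel, and subsequent streams of the same VPN instance reuse that tunnel rather than creating new LSP state:

```python
from dataclasses import dataclass, field

@dataclass
class EgressPe:
    """Hypothetical egress-PE bookkeeping for default-tunnel aggregation."""
    # route distinguisher -> default mLDP tunnel id
    default_tunnel: dict = field(default_factory=dict)
    # (rd, source, group) -> tunnel id carrying that stream
    stream_to_tunnel: dict = field(default_factory=dict)
    _next_tunnel_id: int = 1

    def on_multicast_event(self, rd: str, source: str, group: str) -> int:
        """Handle an IP multicast event message (e.g., a PIM Join)."""
        if rd not in self.default_tunnel:
            # First stream of this VPN instance: a real PE would now
            # signal a default mLDP tunnel via an LDP FEC with (RD, *, *).
            self.default_tunnel[rd] = self._next_tunnel_id
            self._next_tunnel_id += 1
        tunnel = self.default_tunnel[rd]
        self.stream_to_tunnel[(rd, source, group)] = tunnel
        return tunnel

pe = EgressPe()
t1 = pe.on_multicast_event("65000:1", "S1", "G1")
t2 = pe.on_multicast_event("65000:1", "S2", "G2")
assert t1 == t2  # both streams of the VPN share the default tunnel
```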
  • a network device for enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network includes one or more processors; and non-transitory computer readable storage media storing instructions, which when executed by the one or more processors causes the network device to: receive a first internet protocol (IP) multicast event message from a first network device of a first VPN instance, where the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic; cause generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, where the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver; receive a second IP multicast event message from a second network device of the first VPN instance, where the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic.
  • One general aspect includes a non-transitory computer readable storage medium storing instructions, which when executed by a processor of a network device causes the network device to perform operations including receiving a first internet protocol (IP) multicast event message from a first network device of a first VPN instance, where the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic; causing generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, where the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver; receiving a second IP multicast event message from a second network device of the first VPN instance, where the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic.
  • One general aspect includes a method of enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network.
  • the method including monitoring a plurality of multicast streams transmitted over a default multicast label distribution protocol (mLDP) tunnel, where the default mLDP tunnel is used to forward the plurality of multicast streams from sources to receivers of a VPN instance through an MPLS network; responsive to determining that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, performing the following: causing generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream; forwarding packets of the first multicast stream through the dedicated mLDP tunnel; forwarding packets of a subset of the plurality of multicast streams through the default mLDP tunnel, where the subset of the plurality of multicast streams does not include the first multicast stream.
  • One general aspect includes a network device for enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network, the network device including one or more processors; and a non-transitory computer readable storage medium that stores instructions, which when executed by the one or more processors cause the network device to: monitor a plurality of multicast streams transmitted from sources to receivers of a VPN instance over a default multicast label distribution protocol (mLDP) tunnel of an MPLS network, responsive to determining that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, perform the following: cause generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream; forward packets of the first multicast stream through the dedicated mLDP tunnel; and forward packets of a subset of the plurality of multicast streams through the default mLDP tunnel, where the subset of the plurality of multicast streams does not include the first multicast stream
  • One general aspect includes a non-transitory computer readable storage medium storing instructions, which when executed by a processor of a network device cause the network device to perform operations including: monitoring a plurality of multicast streams transmitted over a default multicast label distribution protocol (mLDP) tunnel, where the default mLDP tunnel is used to forward the plurality of multicast streams from sources to receivers of a VPN instance through an MPLS network; responsive to determining that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, performing the following: causing generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream; forwarding packets of the first multicast stream through the dedicated mLDP tunnel; and forwarding packets of a subset of the plurality of multicast streams through the default mLDP tunnel, where the subset of the plurality of multicast streams does not include the first multicast stream.
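The policy check in this aspect can be illustrated with a small sketch. The threshold, names, and rate units below are assumptions: streams whose measured rate exceeds a configured limit are promoted from the default mLDP tunnel to a dedicated one, while the rest stay aggregated:

```python
# Hypothetical forwarding policy: high-rate streams get a dedicated tunnel.
HIGH_RATE_KBPS = 5000  # assumed policy threshold, not from the patent

def assign_tunnels(stream_rates_kbps: dict) -> dict:
    """Return {stream: 'default' | 'dedicated'} per the forwarding policy."""
    return {
        stream: ("dedicated" if rate > HIGH_RATE_KBPS else "default")
        for stream, rate in stream_rates_kbps.items()
    }

rates = {("S1", "G1"): 800, ("S2", "G2"): 12000}
placement = assign_tunnels(rates)
assert placement[("S2", "G2")] == "dedicated"  # exceeds the policy rate
assert placement[("S1", "G1")] == "default"    # stays aggregated
```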
  • Figure 1 illustrates a block diagram of an exemplary multicast VPN service enabled across an MPLS network, according to a standard approach.
  • Figure 2 illustrates a block diagram of an exemplary scalable multicast VPN service enabled across an MPLS network, in accordance with some embodiments.
  • Figure 3 illustrates a block diagram of an exemplary scalable multicast VPN service enabled across an MPLS network, in accordance with some embodiments.
  • Figure 4A illustrates an exemplary LDP FEC message to be transmitted from a network device for generating an mLDP tunnel across the MPLS network, in accordance with some embodiments.
  • Figure 4B illustrates an exemplary opaque value of the LDP FEC message in accordance with some embodiments.
  • Figure 4C illustrates an exemplary opaque value of the LDP FEC message in accordance with some embodiments.
  • Figure 5 illustrates a flow diagram of exemplary operations for enabling a multicast VPN service across an MPLS network, in accordance with some embodiments.
  • Figure 6 illustrates a block diagram of an exemplary scalable multicast VPN service enabled across an MPLS network where a dedicated mLDP tunnel is used to forward traffic of a given multicast stream, in accordance with some embodiments.
  • Figure 7 illustrates an exemplary control message for causing the generation of a dedicated mLDP tunnel for a multicast stream, in accordance with some embodiments.
  • Figure 8 illustrates a flow diagram of exemplary operations for enabling scalable multicast VPN service across an MPLS network where a dedicated mLDP tunnel is used to forward traffic of a given multicast stream, in accordance with some embodiments.
  • Figure 9A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
  • Figure 9B illustrates an exemplary way to implement a special-purpose network device according to some embodiments of the invention.
  • Figure 9C illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
  • References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • Bracketed text and blocks with dashed borders may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
  • similar reference numerals have been used to denote similar elements such as components, features of a system and/or operations performed in a system or element of the system, when applicable.
  • Coupled is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
  • Connected is used to indicate the establishment of communication between two or more elements that are coupled with each other.
  • An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other forms of propagated signals, such as carrier waves or infrared signals).
  • An electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors (e.g., a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field-programmable gate array, other electronic circuitry, or a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data.
  • An electronic device may include non-volatile memory containing the code, since non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed); while the electronic device is turned on, the part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device.
  • Typical electronic devices also include a set of one or more physical network interfaces (NIs) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.
  • a physical NI may comprise radio circuitry capable of receiving data from other electronic devices over a wireless connection and/or sending data out to other devices via a wireless connection.
  • This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication.
  • the radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc.). The radio signal may then be transmitted via antennas to the appropriate recipient(s).
  • the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter.
  • the NIC(s) may facilitate connecting the electronic device to other electronic devices, allowing them to communicate by wire through plugging a cable into a physical port connected to a NIC.
  • One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
  • a network device is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices).
  • Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).
  • Figure 1 illustrates a block diagram of an exemplary multicast VPN service enabled across an MPLS core network according to a standard approach.
  • Figure 1 depicts a prior art scenario based upon in-band signaling, where each IP multicast stream in a given VPN network has an associated mLDP tunnel in the core MPLS network.
  • As illustrated in Figure 1, there is a one-to-one correspondence between the IP multicast streams (S1, G1) and (S2, G2) that need to be forwarded from a first site of a VPN network (e.g., from network 107) and the mLDP tunnels (1 and 2) that carry the multicast traffic within the MPLS network 108 towards a second site of the VPN network (e.g., network 109).
  • Packets of the (S1, G1) multicast stream that originate from the S1 multicast source (e.g., ND 101A) are forwarded through mLDP tunnel 1.
  • Packets of the (S2, G2) multicast stream that originate from the S2 multicast source are forwarded through mLDP tunnel 2.
  • the one-to-one correspondence between IP multicast streams and mLDP tunnels causes scalability challenges in the provider's network (i.e., the MPLS network 108).
  • each PE of the provider's network typically supports several VPN customers (i.e., several VPN instances), which are not shown in Figure 1, and therefore would need to create and maintain states for a significant number of mLDP tunnels associated with all of these VPN customers.
  • the embodiments of the present invention provide a scalable solution for a multicast VPN service enabled across an MPLS core network.
  • a framework is proposed for aggregating multicast streams of a VPN network to be forwarded through a single default mLDP tunnel generated according to an in-band signaling mechanism.
  • mechanisms are proposed to enable an ingress PE of the MPLS network to forward packets of a given multicast stream through a separate dedicated mLDP tunnel, instead of using the default mLDP tunnel, upon determination that the multicast stream does not satisfy a policy requirement.
  • the embodiments of the present invention leverage the advantages of an enhanced mLDP in-band signaling and provide a scalable service with respect to VPN multicast traffic aggregation while limiting the LSP states maintained at the core MPLS network.
  • FIG. 2 illustrates a diagram of an exemplary scalable multicast VPN service enabled across an MPLS network, in accordance with some embodiments.
  • the networks 107, 109, and 110 may include any number of CEs and NDs acting as sources or receivers of multicast traffic streams.
  • the CEs of the networks 109 and 110 couple the receiver NDs (e.g., ND 106 or ND 116) with a PE (e.g., ND 104 or ND 114) of the MPLS network 108.
  • the CEs of the network 107 couple the source NDs (e.g., ND 101A or ND 101B) with a PE (e.g., PE 103) of the MPLS network 108.
  • Each CE or PE is a network device that can be implemented as described with reference to Figures 9A-C.
  • networks 107 and 109 are part of the same VPN instance that belongs to a customer of a service provider.
  • the service provider is an administrator or an owner of the MPLS network 108 that provides multiple networking services to customers, in particular multicast VPN services.
  • the MPLS network 108 includes a set of network devices such as routers or switches forming a provider network that implements the MPLS protocol.
  • the MPLS network can be a core network of a cellular network coupled to an access network (e.g., network 109, network 110).
  • the MPLS network 108 is an access network of the cellular network.
  • the network 109 and optional network 110 include multicast receivers (ND 106 and ND 116) which are receivers of multicast content (e.g., one or more multicast streams) from a source (e.g., ND 101A or 101B) of a multicast stream.
  • the networks 109 and 110 can include any number of receivers and the network 107 can include any number of sources without departing from the scope of the present invention.
  • the sources of the multicast streams can be coupled through the MPLS network 108 to any number of CEs and receivers. These networks can interface through any number of PEs such as ND 103, ND 104, and ND 114.
  • While the example of Figure 2 illustrates a single VPN instance of a single customer (which includes several sites: network 107, network 109, and optional network 110), the MPLS network 108 can provide multicast VPN services to multiple customers (i.e., to multiple VPN instances) without departing from the scope of the current invention.
  • The illustrated network of Figure 2 is simplified and is described with respect to a single VPN instance (including the sites 107 and 109) for the sake of clarity only.
  • ND 105 is coupled with receivers (e.g., ND 106) from the VPN instance that request to receive traffic of a first multicast stream (S1, G1) and a second multicast stream (S2, G2).
  • the ND 105 generates an IP multicast event message 11a (such as a PIM Join, Multicast Source Discovery Protocol (MSDP) Source Announcement (SA), BGP Source Active auto-discovery route or Rendezvous Point (RP) discovery).
  • the message 11a includes an identification of the multicast stream (S1, G1) and an identifier of the VPN instance to which the source and receiver of the multicast stream belong.
  • Upon receipt of the message 11a, the ND 104 causes the generation of a default mLDP tunnel for forwarding the traffic of the first multicast stream (S1, G1). While in this embodiment the first multicast stream is identified by corresponding source and group addresses, in other embodiments only the source address is used without departing from the scope of the present invention.
  • the ND 104 maintains within its forwarding tables a correspondence between IP multicast trees of the multicast streams and the mLDP tunnel created. Therefore upon receipt of the IP multicast event message 11a, the ND 104 keeps track of an association between the mLDP tunnel (which is to be generated) and the first multicast stream as identified by the source and group addresses (S1,G1) and an identifier of the VPN instance to which the source and receiver belong.
  • the mLDP tunnel becomes part of the IP multicast tree associated with the multicast stream (S1, G1).
  • To generate the default mLDP tunnel, ND 104 performs an mLDP in-band signaling mechanism, in which multicast stream information is carried in an LDP Forwarding Equivalence Class (FEC) message through the MPLS network.
  • the source and group addresses in the opaque value of the LDP FEC message causing the generation of the mLDP tunnel are set to wildcards (i.e., zeros).
  • the opaque value of the LDP FEC message includes an identifier of the VPN instance.
  • Figure 4A illustrates an exemplary LDP FEC message 400 to be transmitted from a network device for generating an mLDP tunnel across the MPLS network 108, in accordance with some embodiments.
  • the LDP FEC message 400 includes field 402 including the address of the root of the mLDP tunnel to be generated.
  • In this example, the root of the mLDP tunnel is ND 103, and the root address added to the LDP FEC message is the IP address of ND 103.
  • the LDP FEC message further includes an opaque value field 404.
  • Figure 4B illustrates an exemplary opaque value 406 of the LDP FEC message in accordance with one embodiment.
  • the opaque value 406 includes a type 408 (identifying the type of the opaque value 406), length 410 (indicating the length of the opaque value 406), a field for the identification of the source of the multicast stream 412, a field for the identification of the group of the multicast stream 414, and a field for an identifier of the VPN network that is the route distinguisher (RD) 416.
  • the source and group fields are set to a wildcard (*), therefore causing the mLDP tunnel to be generated to forward all multicast streams of the VPN network identified by the route distinguisher RD.
  • ND 104 transmits the LDP FEC message 12 with the opaque value including the route distinguisher (RD) and wildcards for the group and the source of the multicast stream.
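The opaque value of Figure 4B can be sketched as a simple byte layout. The snippet below is an illustrative Python encoding only: the type code, the field widths (1-byte type, 2-byte length), and the field order are assumptions for illustration, not the exact on-the-wire format defined by the patent.

```python
import struct

WILDCARD = "0.0.0.0"  # a wildcard (*) source or group is encoded as zeros


def ipv4_bytes(addr: str) -> bytes:
    """Convert a dotted-quad IPv4 address into 4 bytes."""
    return bytes(int(part) for part in addr.split("."))


def encode_opaque_value(rd: bytes, source: str = WILDCARD,
                        group: str = WILDCARD, opaque_type: int = 250) -> bytes:
    """Encode an opaque value shaped like Figure 4B: type 408, length 410,
    source 412, group 414, and an 8-byte route distinguisher 416.
    The type value and field widths are assumptions for illustration."""
    body = ipv4_bytes(source) + ipv4_bytes(group) + rd
    return struct.pack("!BH", opaque_type, len(body)) + body


# Type-0 route distinguisher: 2-byte type, 2-byte ASN, 4-byte assigned number
rd = struct.pack("!HHI", 0, 64512, 100)
opaque = encode_opaque_value(rd)  # (RD, *, *): source and group wildcarded
```

With the source and group wildcarded, a single LDP FEC message (and hence a single default mLDP tunnel) covers every multicast stream announced under that RD.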
  • All the receivers in network 109 of the VPN instance will trigger a single mLDP tunnel for carrying the multicast traffic from the source S1 (ND 101A) across the MPLS network 108.
  • the generated mLDP tunnel 13 is an LSP distribution tree.
  • the leaves initiate the mLDP tunnel setup and tear-down (e.g., ND 104 initiates the generation of the mLDP tunnel 13 by transmitting LDP FEC message 12 with opaque value (RD, *, *)) and install forwarding states to deliver the traffic received on the mLDP tunnel 13 to the receivers of the VPN instance within the network 109.
  • Upon receipt of the LDP FEC message 12, transit NDs (not illustrated in Figure 2) install MPLS forwarding states and propagate the mLDP setup (or tear-down) messages toward the root, ND 103.
  • ND 103, the root of the mLDP tunnel 13, installs forwarding states to map traffic into the mLDP tunnel 13 from the sources of the multicast streams included in network 107.
  • ND 104 further receives a second IP multicast event message (11b) from a second network device (e.g., ND 105) coupled to a second receiver of a second multicast stream (e.g., ND 106).
  • In this example the second network device and the second receiver of the second multicast stream (S2, G2) are the same as the CE and the receiver of the first multicast stream (S1, G1); however, in other embodiments, these network devices may be different while being part of the same VPN instance.
  • the second IP multicast event message 11b includes an identifier of the VPN instance and an identification of the second multicast stream (S2, G2) for which ND 106 requests to receive traffic.
  • Upon receipt of the second IP multicast event message 11b, the ND 104 determines whether the second multicast stream relates to traffic for the first VPN instance, for which an mLDP tunnel is already generated. In this example, the mLDP tunnel 13 is generated when the first IP multicast event message 11a is received. Therefore, in response to determining that the second multicast stream (S2, G2) relates to traffic for the first VPN instance, the ND 104 causes packets of the second multicast stream (S2, G2) to be forwarded through the default mLDP tunnel 13. The traffic of the second multicast stream (S2, G2) is forwarded from the source S2 (ND 101B) towards the receiver, ND 106, through the mLDP tunnel 13 in the MPLS network.
  • the embodiments of the present invention enable the creation of a single mLDP tunnel 13 that will be used for forwarding traffic for both the first and the second multicast streams (S1, G1) and (S2, G2).
  • a VPN instance can be associated with multiple RD values.
  • a route distinguisher RD does not uniquely identify the VPN instance. Therefore, if the mechanism above of transmitting an LDP FEC message including an RD is used, it can lead to the creation of several mLDP tunnels for forwarding traffic of a same VPN instance. While these embodiments still provide significant advantages compared to the prior art approaches discussed with respect to Figure 1, as multicast streams are forwarded through a default mLDP tunnel per RD, the embodiments described with reference to Figure 3 below provide additional advantages by enabling forwarding of all traffic of the VPN instance through a single mLDP tunnel.
  • FIG. 3 illustrates a block diagram of an exemplary scalable multicast VPN service enabled across an MPLS network, in accordance with some embodiments.
  • the use of a new extension to the LDP FEC message 32 enables the generation of the mLDP tunnel 33.
  • the mLDP tunnel 33 aggregates all multicast traffic received from various sources and destined towards various receivers of a same VPN instance.
  • the opaque value 420 of Figure 4C is used.
  • the opaque value 420 includes similar fields with the exception of field 418 which includes the VPN-ID of the VPN instance as opposed to the RD.
  • the VPN-ID is a global identifier that uniquely identifies the VPN instance.
  • the VPN-ID is defined in IETF RFC 2685.
  • Upon receipt of an IP multicast event message for the first multicast stream (S1, G1) or the second multicast stream (S2, G2), the ND 104 causes the generation of a single mLDP tunnel 33 capable of forwarding traffic of both multicast streams within the MPLS network towards the receiver ND 106.
  • the generation of the single mLDP tunnel 33 is caused by the transmission of the LDP FEC message with the opaque value including (VPN-ID, *, *).
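The effect of keying tunnels on the VPN-ID rather than the RD can be illustrated with a toy model (all identifiers below are invented placeholders): a VPN instance associated with two RDs yields two default tunnels under RD keying, but a single tunnel under VPN-ID keying.

```python
# Two multicast streams of one VPN instance, announced under different RDs
# (the RD and VPN-ID values are illustrative placeholders, not real values).
streams = [
    {"rd": "64512:100", "vpn_id": "vpn-acme", "sg": ("S1", "G1")},
    {"rd": "64512:200", "vpn_id": "vpn-acme", "sg": ("S2", "G2")},
]

# One default mLDP tunnel is created per distinct opaque-value key.
tunnels_keyed_by_rd = {s["rd"] for s in streams}
tunnels_keyed_by_vpn_id = {s["vpn_id"] for s in streams}

assert len(tunnels_keyed_by_rd) == 2      # RD keying: two mLDP tunnels
assert len(tunnels_keyed_by_vpn_id) == 1  # VPN-ID keying: a single tunnel
```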
  • the generation of the mLDP tunnel for all traffic of a VPN instance based upon the in-band signaling using the VPN-ID of a VPN instance enables the creation of a single mLDP tunnel 33 that will be used for forwarding traffic for both the first and the second multicast streams (S1, G1) and (S2, G2), as opposed to standard approaches that provide a one-to-one correspondence between a multicast stream and the mLDP tunnel within the MPLS network 108 (see Figure 1). This significantly decreases the amount of forwarding states maintained within the MPLS network.
  • ND 104 receives a first IP multicast event message (e.g., message 11a) from a first network device (ND 105) of a first VPN instance.
  • the first IP multicast event message includes an identifier of the first VPN instance (e.g., an RD or a VPN-ID identifying the VPN instance) and an identification (e.g., a source address S1 and/or a group address G1) of a first multicast stream for which a first receiver (ND 106) of the first VPN instance requests to receive traffic.
  • the first network device is a customer equipment (CE) in a second site of the first VPN instance, coupled with one or more receivers of multicast streams.
  • the CE couples the receivers with sources of the multicast streams through an MPLS network (e.g., MPLS network 108).
  • ND 104 causes the generation of a default multicast label distribution protocol (mLDP) tunnel (e.g., tunnel 13 or 33) for forwarding traffic of the first multicast stream (S1, G1) from a first source (ND 101A) to the first receiver (ND 106) of the first multicast stream through the MPLS network 108.
  • the MPLS network couples a first site (IP network 107) of the first VPN instance including the first source (ND 101A) and a second site (IP network 109) of the first VPN instance including the first receiver (ND 106).
  • ND 104 receives a second IP multicast event message (e.g., 11b) from a second network device of the first VPN instance.
  • the second IP multicast event message (11b) includes the identifier of the first VPN instance (e.g., an RD or a VPN-ID) and an identification of a second multicast stream (e.g., a source address S2 and/or a group address G2) for which a second receiver of the first VPN instance requests to receive traffic.
  • In this example the second receiver is the same as the first receiver, ND 106; however, in other embodiments, the second receiver can be different without departing from the scope of the present invention.
  • ND 104 determines whether the second multicast stream (S2, G2) and the first multicast stream (S1, G1) include traffic for the first VPN instance. Responsive to determining that the second multicast stream (S2, G2) and the first multicast stream (S1, G1) include traffic for the first VPN instance, ND 104 causes (at operation 512) packets of the second multicast stream to be forwarded through the default mLDP tunnel (e.g., mLDP tunnel 13 or 33).
  • the ND 104 causes the packets of the second multicast stream to be forwarded through the default mLDP tunnel by causing the configuration of the forwarding tables of the NDs of the MPLS network to include forwarding table entries for forwarding the packets through the default mLDP tunnel.
  • ND 104 receives (at operation 506), over the default mLDP tunnel, packets of the first and the second multicast streams to be forwarded towards the first and the second receivers respectively.
  • When the ND 104 determines (at operation 510) that the second multicast stream (S2, G2) and the first multicast stream (S1, G1) include traffic for different VPN instances, ND 104 causes (at operation 504) the generation of a new default mLDP tunnel (different from the first default mLDP tunnel associated with the first multicast stream) to be associated with the second multicast stream of the second VPN instance.
  • the ND 104 causes the packets of the second multicast stream to be forwarded through the new default mLDP tunnel by causing the configuration of the forwarding tables of the NDs of the MPLS network to include forwarding table entries for forwarding the packets through this second default mLDP tunnel instead of the first default mLDP tunnel.
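The bookkeeping performed by the egress PE across these operations can be sketched as follows. The class and method names are hypothetical; a real implementation would additionally transmit the LDP FEC message with the (VPN identifier, *, *) opaque value when a new default tunnel is created.

```python
class EgressPE:
    """Illustrative model of an egress PE (e.g., ND 104): one default
    mLDP tunnel per VPN instance, shared by all of its multicast streams."""

    def __init__(self) -> None:
        self.default_tunnel = {}   # VPN identifier -> tunnel id
        self.forwarding = {}       # (source, group) -> tunnel id
        self._next_tunnel_id = 13  # arbitrary starting id, for illustration

    def on_ip_multicast_event(self, vpn_id: str, source: str, group: str) -> int:
        tunnel = self.default_tunnel.get(vpn_id)
        if tunnel is None:
            # No default tunnel yet for this VPN instance: create one
            # (i.e., the step that sends the LDP FEC message).
            tunnel = self._next_tunnel_id
            self._next_tunnel_id += 1
            self.default_tunnel[vpn_id] = tunnel
        # Forward this stream through the (possibly pre-existing) default tunnel.
        self.forwarding[(source, group)] = tunnel
        return tunnel


pe = EgressPE()
t1 = pe.on_ip_multicast_event("vpn-1", "S1", "G1")
t2 = pe.on_ip_multicast_event("vpn-1", "S2", "G2")  # reuses the vpn-1 tunnel
t3 = pe.on_ip_multicast_event("vpn-2", "S4", "G4")  # new VPN, new tunnel
```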
  • a default mLDP tunnel can cause some multicast streams to be forwarded towards PEs of the MPLS network that do not connect to any receivers of these multicast streams. This scenario may result in a waste of bandwidth on the path toward these PEs, as well as a waste of processing bandwidth at the PEs.
  • a dedicated mLDP tunnel can be generated for forwarding a given multicast stream in addition to generating the default mLDP tunnel for forwarding traffic of other multicast streams of a same VPN instance.
  • the dedicated mLDP tunnel can be used to forward traffic originating from high rate sources or alternatively from a source designated by an administrator of the multicast service.
  • Figure 8 illustrates a flow diagram of exemplary operations for enabling scalable multicast VPN service across an MPLS network where a dedicated mLDP tunnel is used to forward traffic of a given multicast stream, in accordance with some embodiments.
  • the operations 800 are performed once a default mLDP tunnel 61 has been generated to aggregate and forward all multicast streams of a VPN instance over the MPLS network 108.
  • multiple multicast streams are forwarded over the default mLDP tunnel 61 (e.g., traffic for (S1, G1), (S2, G2) and (S3, G3) is forwarded towards the receivers ND 106 and ND 116).
  • the three multicast streams are forwarded towards the two egress PEs, ND 104 and ND 114, even though the egress PEs do not serve receivers of each one of the streams. While all three multicast streams are forwarded towards ND 106 and ND 116, only ND 116 has requested to receive the traffic of multicast stream (S3, G3).
  • forwarding the traffic of the multicast stream (S3, G3) towards ND 104 therefore wastes bandwidth and processing power within the MPLS network.
  • the ND 103 performs the exemplary operations of Figure 8.
  • the ND 103 monitors multiple multicast streams (e.g., (S1, G1), (S2, G2), and (S3, G3)) transmitted from sources (ND 101A including source S1, ND 101B including source S2, and ND 101C including source S3) to receivers of a VPN instance over the default mLDP tunnel 61 of the MPLS network 108.
  • ND 103 may be configured to monitor the multicast streams and identify a predetermined multicast stream. An administrator may input an identification of a multicast stream that needs to be forwarded through a dedicated tunnel instead of being aggregated with the other multicast streams of the VPN instance.
  • In response to determining (operation 804) that a first multicast stream (e.g., multicast stream (S3, G3)) from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, ND 103 causes (operation 808) the generation of a dedicated mLDP tunnel for forwarding packets of the (S3, G3) multicast stream, and forwards (operation 810) these packets through the dedicated mLDP tunnel 63 instead of the default mLDP tunnel. ND 103 further forwards packets of the other multicast streams (e.g., the subset (S1, G1) and (S2, G2)) through the default mLDP tunnel 61.
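The monitoring and policy check at the ingress PE can be sketched as a rate comparison. The function name and the interpretation of the forwarding policy as a maximum rate are assumptions for illustration; an administrator-designated stream could be flagged the same way.

```python
def classify_streams(rates_bps: dict, policy_max_bps: int):
    """Split monitored streams into those kept on the default mLDP tunnel
    and those moved to a dedicated tunnel (streams violating the policy)."""
    dedicated = {sg for sg, rate in rates_bps.items() if rate > policy_max_bps}
    default = set(rates_bps) - dedicated
    return default, dedicated


# Illustrative rates: (S3, G3) is the high-rate source in this example.
rates = {
    ("S1", "G1"): 2_000_000,
    ("S2", "G2"): 1_500_000,
    ("S3", "G3"): 90_000_000,
}
default, dedicated = classify_streams(rates, policy_max_bps=50_000_000)
```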
  • control message 700 is a new type of PIM control message. It includes a field 702 indicating the type of message; a field 704 indicating the length of the control message; a reserved set of bits 706; a field 708 indicating the address of the source for which the dedicated mLDP tunnel is to be generated; and a field 710 indicating the address of the group for which the dedicated mLDP tunnel is to be generated.
  • the type of the message includes a new value that represents a request to generate a dedicated mLDP tunnel.
  • the ND 103 encapsulates the control message 700 and uses the destination address 224.0.0.13 to transmit the control message over the default mLDP tunnel 61 (not illustrated in Figure 6).
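The layout of control message 700 can be sketched as a byte packing. The field widths (1-byte type, 2-byte length, one reserved byte, IPv4 source and group addresses) and the type value 200 are invented placeholders for illustration, not values defined by the patent.

```python
import socket
import struct

ALL_PIM_ROUTERS = "224.0.0.13"  # destination that punts to the control plane


def encode_control_message(msg_type: int, source: str, group: str) -> bytes:
    """Pack fields 702-710 of control message 700: type, length,
    reserved bits, source address, and group address."""
    src = socket.inet_aton(source)
    grp = socket.inet_aton(group)
    length = 1 + 2 + 1 + len(src) + len(grp)  # total message length in bytes
    return struct.pack("!BHB", msg_type, length, 0) + src + grp


msg = encode_control_message(msg_type=200, source="10.0.0.3", group="232.1.1.3")
```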
  • When the control message 700 is received at an egress PE of the MPLS network 108 (e.g., at ND 104), the PE decapsulates the control message and parses it. Based upon the destination group address (224.0.0.13), ND 104 punts the control message to the control plane.
  • the control plane can be a centralized or distributed control plane without departing from the scope of the present invention.
  • the message is then processed at an enhanced PIM module of the control plane and based upon the type 702, the enhanced PIM module recognizes the message as a request for generating a dedicated mLDP tunnel for the given source and group included in the control message 700.
  • ND 104 transmits (not illustrated in Figure 6) an LDP FEC message with an opaque value including an identifier of the VPN instance (e.g., the RD), as well as an identification of the group and the source of the multicast stream (S3, G3).
  • a receiver (e.g., ND 106) causes the generation of an IP multicast tree, and of an mLDP tunnel associated with the IP multicast tree, to forward traffic of a multicast stream through the MPLS network towards the receiver.
  • In order to enable the forwarding of traffic from the source towards the MPLS tunnel, i.e., to configure the network 107 to forward the multicast traffic to ND 103, several mechanisms can be used.
  • all the PEs of the MPLS network 108 are configured with anycast rendezvous point (anycast RP). This mechanism causes the ND 103 to learn about the sources hosted in the network 107, as this particular PE (ND 103) is the nearest RP of these sources as determined by anycast RP.
  • the ND 103 can be statically configured to join the groups for which the source is behind that PE.
  • the embodiments of the present invention provide several mechanisms for enabling a scalable multicast VPN service using mLDP in-band signaling in BGP/MPLS service provider networks.
  • the various embodiments enable the aggregation of multiple multicast streams of a given VPN customer onto a single MP-LSP distribution tree (i.e., an mLDP tunnel).
  • Some embodiments described above enable the switch from the use of a default mLDP tunnel for all multicast streams to having a dedicated mLDP tunnel for a given multicast stream to separately transmit this multicast stream.
  • the embodiments enable a reduction in the number of states related to the multicast traffic that need to be stored and maintained at the MPLS network, consequently reducing the need for storage and processing resources.
  • the solution presented herein is highly scalable and enables a service provider to offer the multicast VPN service to multiple customers without a high burden on the network devices of the MPLS network.
  • the solution further avoids the use of heavy out-of-band signaling such as BGP/PIM. Further the solution avoids the use of a separate data plane and control plane for unicast and multicast traffic.
  • Figure 9A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
  • Figure 9A shows NDs 900A-H, and their connectivity by way of lines between 900A-900B, 900B-900C, 900C-900D, 900D-900E, 900E-900F, 900F-900G, and 900A-900G, as well as between 900H and each of 900A, 900C, 900D, and 900G.
  • NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link).
  • An additional line extending from NDs 900A, 900E, and 900F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
  • Two of the exemplary ND implementations in Figure 9A are: 1) a special-purpose network device 902 that uses custom application-specific integrated-circuits (ASICs) and a special-purpose operating system (OS); and 2) a general purpose network device 904 that uses common off-the-shelf (COTS) processors and a standard OS.
  • the special-purpose network device 902 includes networking hardware 910 comprising a set of one or more processor(s) 912, forwarding resource(s) 914 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 916 (through which network connections are made, such as those shown by the connectivity between NDs 900A-H), as well as non-transitory machine readable storage media 918 having stored therein networking software 920.
  • the networking software 920 may be executed by the networking hardware 910 to instantiate a set of one or more networking software instance(s) 922.
  • Each of the networking software instance(s) 922, and that part of the networking hardware 910 that executes that network software instance form a separate virtual network element 930A-R.
  • Each of the virtual network element(s) (VNEs) 930A-R includes a control communication and configuration module 932A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 934A-R, such that a given virtual network element (e.g., 930A) includes the control communication and configuration module (e.g., 932A), a set of one or more forwarding table(s) (e.g., 934A), and that portion of the networking hardware 910 that executes the virtual network element (e.g., 930A).
  • the networking software 920 includes an enhanced mLDP module 921.
  • the enhanced mLDP module 921 may be executed by the networking hardware 910 to instantiate a set of one or more enhanced mLDP instances 931A-R, which cause the ND 902 to perform the operations described with reference to Figures 2-8.
  • the special-purpose network device 902 is often physically and/or logically considered to include: 1) a ND control plane 924 (sometimes referred to as a control plane) comprising the processor(s) 912 that execute the control communication and configuration module(s) 932A-R; and 2) a ND forwarding plane 926 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 914 that utilize the forwarding table(s) 934A-R and the physical NIs 916.
  • the ND control plane 924 (the processor(s) 912 executing the control communication and configuration module(s) 932A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 934A-R, and the ND forwarding plane 926 is responsible for receiving that data on the physical NIs 916 and forwarding that data out the appropriate ones of the physical NIs 916 based on the forwarding table(s) 934A-R.
  • Figure 9B illustrates an exemplary way to implement the special-purpose network device 902 according to some embodiments of the invention.
  • Figure 9B shows a special-purpose network device including cards 938 (typically hot pluggable). While in some embodiments the cards 938 are of two types (one or more that operate as the ND forwarding plane 926 (sometimes called line cards), and one or more that operate to implement the ND control plane 924 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card).
  • a service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)).
  • the general purpose network device 904 includes hardware 940 comprising a set of one or more processor(s) 942 (which are often COTS processors) and physical NIs 946, as well as non-transitory machine readable storage media 948 having stored therein software 950.
  • the processor(s) 942 execute the software 950 to instantiate one or more sets of one or more applications 964A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization.
  • the virtualization layer 954 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 962A-R called software containers that may each be used to execute one (or more) of the sets of applications 964A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes.
  • the virtualization layer 954 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 964A-R is run on top of a guest operating system within an instance 962A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes.
  • one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application.
  • a unikernel can be implemented to run directly on hardware 940, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container.
  • embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 954, unikernels running within software containers represented by instances 962A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).
  • the software 950 includes an enhanced mLDP module 951.
  • the enhanced mLDP module 951 may be executed by the hardware 940 to instantiate a set of one or more application(s) 964A-R, which cause the ND 904 to perform the operations described with reference to Figures 2-8.
  • the instantiation of the one or more sets of one or more applications 964A-R, as well as virtualization if implemented, are collectively referred to as software instance(s) 952.
  • the virtual network element(s) 960A-R perform similar functionality to the virtual network element(s) 930A-R - e.g., similar to the control communication and configuration module(s) 932A and forwarding table(s) 934A (this virtualization of the hardware 940 is sometimes referred to as network function virtualization (NFV)).
  • While embodiments are illustrated with each instance 962A-R corresponding to one VNE 960A-R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 962A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
  • the virtualization layer 954 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 962A-R and the physical NI(s) 946, as well as optionally between the instances 962A-R; in addition, this virtual switch may enforce network isolation between the VNEs 960A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
  • the third exemplary ND implementation in Figure 9A is a hybrid network device 906, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND.
  • a platform VM, i.e., a VM that implements the functionality of the special-purpose network device 902, could provide for para-virtualization to the networking hardware present in the hybrid network device 906.
  • each of the VNEs receives data on the physical NIs (e.g., 916, 946) and forwards that data out the appropriate ones of the physical NIs (e.g., 916, 946).
  • a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and "destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP) or Transmission Control Protocol (TCP)), and differentiated services code point (DSCP) values.
  • the NDs of Figure 9A may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services.
  • Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services) and private webpages (e.g., username/password accessed webpages providing email services).
  • end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers.
  • one or more of the electronic devices operating as the NDs in Figure 9A may also host one or more such servers (e.g., in the case of the general purpose network device 904, one or more of the software instances 962A-R may operate as servers; the same would be true for the hybrid network device 906; in the case of the special-purpose network device 902, one or more such servers could also be run on a virtualization layer executed by the processor(s) 912); in which case the servers are said to be co-located with the VNEs of that ND.
  • a virtual network is a logical abstraction of a physical network (such as that in Figure 9A) that provides network services (e.g., L2 and/or L3 services).
  • a virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
  • a network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network.
  • a virtual network instance (VNI) is a specific instance of a virtual network on an NVE (e.g., a NE/VNE on an ND, a part of a NE/VNE on an ND where that NE/VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND).
  • a virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be a physical or virtual port identified through a logical interface identifier (e.g., a VLAN ID).
  • Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IP VPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network)).
  • Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network-originated attacks, to avoid malformed route announcements), and management capabilities (e.g., fault detection and processing).
  • Figure 9C illustrates a network with a single network element on each of the NDs of Figure 9A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
  • Figure 9C illustrates network elements (NEs) 970A-H with the same connectivity as the NDs 900A-H of Figure 9A.
  • Figure 9C illustrates that the distributed approach 972 distributes responsibility for generating the reachability and forwarding information across the NEs 970A-H; in other words, the process of neighbor discovery and topology discovery is distributed.
  • the control communication and configuration module(s) 932A-R of the ND control plane 924 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE))) that communicate with other NEs to exchange routes, and then select those routes based on one or more routing metrics.
  • the NEs 970A-H (e.g., the processor(s) 912 executing the control communication and configuration module(s) 932A-R) perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by distributively determining the reachability within the network and calculating their respective forwarding information.
  • Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 924.
  • the ND control plane 924 programs the ND forwarding plane 926 with information (e.g., adjacency and route information) based on the routing structure(s).
  • the ND control plane 924 programs the adjacency and route information into one or more forwarding table(s) 934A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 926.
  • the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 902, the same distributed approach 972 can be implemented on the general purpose network device 904 and the hybrid network device 906.
  • Figure 9C illustrates a centralized approach 974 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination.
  • the illustrated centralized approach 974 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 976 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized.
  • the centralized control plane 976 has a south bound interface 982 with a data plane 980 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 970A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes).
  • the centralized control plane 976 includes a network controller 978, which includes a centralized reachability and forwarding information module 979 that determines the reachability within the network and distributes the forwarding information to the NEs 970A-H of the data plane 980 over the south bound interface 982 (which may use the OpenFlow protocol).
  • the network intelligence is centralized in the centralized control plane 976 executing on electronic devices that are typically separate from the NDs.
  • each of the control communication and configuration module(s) 932A-R of the ND control plane 924 typically include a control agent that provides the VNE side of the south bound interface 982.
  • the ND control plane 924 (the processor(s) 912 executing the control communication and configuration module(s) 932A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 976 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 979 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 932A-R, in addition to communicating with the centralized control plane 976, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 974, but may also be considered a hybrid approach).
  • the same centralized approach 974 can be implemented with the general purpose network device 904 (e.g., each of the VNE 960A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 976 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 979; it should be understood that in some embodiments of the invention, the VNEs 960A-R, in addition to communicating with the centralized control plane 976, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 906.
  • NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run, and NFV and SDN both aim to make use of commodity server hardware and physical switches.
  • Figure 9C also shows that the centralized control plane 976 has a north bound interface 984 to an application layer 986, in which resides application(s) 988.
  • the centralized control plane 976 has the ability to form virtual networks 992 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 970A-H of the data plane 980 being the underlay network)) for the application(s) 988.
  • the centralized control plane 976 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
  • While Figure 9C shows the distributed approach 972 separate from the centralized approach 974, the effort of network control may be distributed differently or the two combined in certain embodiments of the invention.
  • For example: 1) embodiments may generally use the centralized approach (SDN) 974, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree.
  • Such embodiments are generally considered to fall under the centralized approach 974, but may also be considered a hybrid approach.
  • While Figure 9C illustrates the simple case where each of the NDs 900A-H implements a single NE 970A-H, the network control approaches described with reference to Figure 9C also work for networks where one or more of the NDs 900A-H implement multiple VNEs (e.g., VNEs 930A-R, VNEs 960A-R, those in the hybrid network device 906).
  • the network controller 978 may also emulate the implementation of multiple VNEs in a single ND.
  • the network controller 978 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 992 (all in the same one of the virtual network(s) 992, each in different ones of the virtual network(s) 992, or some combination).
  • the network controller 978 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 976 to present different VNEs in the virtual network(s) 992 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
  • a network interface may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI.
  • a virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface).
  • a loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address.
  • the IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
  • Some NDs provide support for implementing VPNs (Virtual Private Networks) (e.g., Layer 2 VPNs and/or Layer 3 VPNs).
  • the NDs where a provider's network and a customer's network are coupled are respectively referred to as PEs (Provider Edge) and CEs (Customer Edge).
  • in a Layer 2 VPN, forwarding typically is performed on the CE(s) on either end of the VPN, and traffic is sent across the network (e.g., through one or more PEs coupled by other NDs).
  • Layer 2 circuits are configured between the CEs and PEs (e.g., an Ethernet port, an ATM permanent virtual circuit (PVC), a Frame Relay PVC).
  • in a Layer 3 VPN, routing typically is performed by the PEs.
  • an edge ND that supports multiple VNEs may be deployed as a PE; and a VNE may be configured with a VPN protocol.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Methods and Apparatuses for enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network are described. Generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of a first multicast stream from a first source to a first receiver of a first multicast stream through an MPLS network is caused. Upon determining that a second multicast stream and the first multicast stream include traffic for a first VPN instance, packets of the second multicast stream are caused to be forwarded through the default mLDP tunnel. Thus, the packets of the first and the second multicast streams are forwarded towards the first and the second receiver respectively through the default mLDP tunnel.

Description

METHOD AND APPARATUS FOR ENABLING A SCALABLE MULTICAST VIRTUAL PRIVATE NETWORK SERVICE ACROSS A MULTICAST LABEL DISTRIBUTION PROTOCOL NETWORK USING IN-BAND SIGNALING
TECHNICAL FIELD
[0001] Embodiments of the invention relate to the field of packet networks; and more specifically, to enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling.
BACKGROUND
[0002] Border Gateway Protocol Multiprotocol Label Switching Virtual Private Network (BGP/MPLS VPN) networks offer a VPN service that enables sites of a VPN network (e.g., an enterprise network) to transport network traffic to other sites of the VPN network using an MPLS provider network. MVPN (Multicast in VPN) is a technology that enables transport of multicast traffic from a first site of the VPN network including a source of the multicast traffic to other sites of the same VPN network including receivers of the multicast traffic, across a service provider network. Each of the provider's network and the VPN network includes network devices (NDs) that enable forwarding of the traffic from the source to the receivers of a multicast stream. For example, the NDs where a provider's network and a customer's network (e.g., a customer VPN) are coupled are respectively referred to as PEs (Provider Edge) and CEs (Customer Edge).
[0003] The Internet Engineering Task Force (IETF) in collaboration with the community of service providers developed a technology that enables the native support of IP multicast in MPLS networks by extending the Label Distribution Protocol (LDP). This technology is referred to as Label Switched Multicast (LSM) and Multicast Label Switched Path (LSP) distribution trees are called Multicast LDP (mLDP) tunnels. mLDP is documented in IETF Request for Comments (RFC) 6388. mLDP permits the creation of point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP) multicast distribution trees within the MPLS network.
[0004] According to the mLDP specification, multicast packets are encapsulated in mLDP tunnels in the MPLS network; once they reach the end of the MPLS network, the MPLS labels are decapsulated and the inner multicast packets are forwarded as regular multicast packets in the IP domain. Two signaling mechanisms may be used to map an IP multicast stream to an mLDP tunnel: 1) in-band signaling, and 2) out-of-band signaling. With in-band signaling, the multicast stream information, along with an identifier (e.g., an address) of the root of a P2MP or MP2MP multicast LSP distribution tree, is carried in a field of a label Forwarding Equivalence Class (FEC) message within the MPLS core network. With out-of-band signaling, the multicast stream information is carried through out-of-band routing protocols such as Border Gateway Protocol (BGP), Protocol Independent Multicast (PIM), etc.
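As a concrete illustration of the in-band mapping just described, the sketch below packs a (source, group, route distinguisher) triple into an opaque value that could travel inside a label FEC message. The type code, field order, and lengths here are assumptions chosen for illustration (loosely inspired by transit VPNv4-style in-band signaling); they are not asserted to be the encoding of the claimed embodiments.

```python
import socket
import struct

def encode_vpnv4_opaque(source: str, group: str, rd: bytes) -> bytes:
    """Encode a hypothetical in-band opaque value carrying (RD, S, G).

    Layout: 1-byte type, 2-byte length, then IPv4 source, IPv4 group,
    and an 8-byte route distinguisher. The type code (5) and field
    order are illustrative assumptions, not a normative encoding.
    """
    assert len(rd) == 8, "route distinguisher is 8 octets"
    value = socket.inet_aton(source) + socket.inet_aton(group) + rd
    return struct.pack("!BH", 5, len(value)) + value

# A 3-byte header plus a 16-byte value: 19 bytes total.
opaque = encode_vpnv4_opaque("10.1.1.1", "232.0.0.1", b"\x00\x00" + b"\x01" * 6)
assert len(opaque) == 19
```

Because the opaque value itself identifies the stream, every distinct (S, G) produces a distinct FEC element, which is the mechanical reason plain in-band signaling yields one mLDP tunnel per stream.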
[0005] In a scenario based upon in-band signaling, each IP multicast stream in a given VPN network has an associated mLDP tunnel (also referred to as an associated LSP tree) in the MPLS network. Thus, there is a one-to-one correspondence between the IP multicast streams that need to be forwarded for a VPN network and the mLDP tunnels that carry the multicast traffic within the MPLS network. This one-to-one correspondence between IP multicast streams and mLDP tunnels causes scalability challenges in the provider's network (i.e., the MPLS network). In fact, each PE of the provider's network typically supports several VPN customers (i.e., several VPN instances) and therefore would need to create and maintain states for a significant number of mLDP tunnels associated with these VPN customers. This places an enormous load on the network devices of the MPLS network due to the number of states that need to be maintained for the mLDP tunnels.
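The scale of the state problem follows directly from the arithmetic: with one tunnel per (VPN, stream) pair, tunnel state grows multiplicatively, whereas a shared default tunnel per VPN instance grows only with the number of VPN instances. A minimal sketch, with purely illustrative numbers:

```python
def mldp_tunnel_states(num_vpns: int, streams_per_vpn: int, shared_default: bool) -> int:
    """Count the mLDP tunnel states a PE must hold.

    With plain in-band signaling each (VPN, stream) pair needs its own
    tunnel; with a shared default tunnel per VPN instance the count
    collapses to one tunnel per VPN.
    """
    return num_vpns if shared_default else num_vpns * streams_per_vpn

# 100 VPN customers, 500 streams each: 50,000 tunnels vs. 100.
assert mldp_tunnel_states(100, 500, shared_default=False) == 50000
assert mldp_tunnel_states(100, 500, shared_default=True) == 100
```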
[0006] Rosen MVPN (IETF RFC 6037) is a solution which realizes MVPN service using a concept of a multicast domain. In Rosen MVPN, PEs of a provider's network create trees between each other. The trees created can be of two types: Default Multicast Distribution Tree (MDT), or Data Multicast Distribution Tree (Data MDT). The type of the tree created determines the traffic that the tree carries and the network devices that join the tree.
[0007] In each multicast domain, a default MDT is defined. In this model, the default MDT acts as a Local Area Network (LAN) interface that connects the corresponding PEs of a VPN. This is done regardless of whether a CE coupled with a given PE wants to join a particular multicast stream inside the VPN network. Thus, in this scenario, several multicast streams may be carried through the single default MDT tree over the MPLS network. The default tree carries: 1) control plane traffic, and 2) low-rate data plane traffic for particular sources. The default MDT is constructed using a global multicast group address by running Protocol Independent Multicast (PIM) in the MPLS network. At the same time, customer signaling is also done using PIM across the PEs. The default MDT uses Generic Routing Encapsulation (GRE) in the data plane: all customer multicast streams are encapsulated in the default MDT using GRE encapsulation and sent through the MPLS core network to egress PEs.
[0008] Rosen MVPN also enables service providers to create a separate MDT tree in the MPLS network for a given multicast stream using a policy configuration called Data MDT (S-PMSI, Selective Provider Multicast Service Interface). In this scenario, only the given multicast stream is transported on the Data MDT, at the expense of extra states in the MPLS network. Data MDTs may be used for forwarding high-rate multicast sources.
[0009] Service providers have been using the Rosen MVPN solution for quite some time to offer MVPN service in their networks. However, given that multicast streams are routed via the core routing protocol and not label switched (as is the case for unicast traffic), the providers' networks need to be mindful of the core routing tables. This solution requires separate control planes and data planes for unicast traffic and multicast traffic in the MPLS core, adding significant complexity to the network.
SUMMARY
[0010] One general aspect includes a method of enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network. The method includes receiving a first internet protocol (IP) multicast event message from a first network device of a first VPN instance, where the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic. The method also includes causing generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, where the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver. The method also includes receiving a second IP multicast event message from a second network device of the first VPN instance, where the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic. The method also includes determining whether the second multicast stream and the first multicast stream include traffic for the first VPN instance; and responsive to determining that the second multicast stream and the first multicast stream include traffic within the first VPN instance, causing packets of the second multicast stream to be forwarded through the default mLDP tunnel. The method also includes receiving, over the default mLDP tunnel, packets of the first and the second multicast stream to be forwarded towards the first and the second receiver respectively.
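The root-side bookkeeping implied by this method can be sketched as follows; the class name, tunnel names, and event shape are hypothetical, and real tunnel generation would involve LDP FEC signaling rather than a string label.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Stream = Tuple[str, str]  # (source address, group address)

@dataclass
class RootPE:
    """Toy model of the receiving network device described above."""
    default_tunnels: Dict[str, str] = field(default_factory=dict)   # vpn_id -> tunnel
    stream_to_tunnel: Dict[Tuple[str, Stream], str] = field(default_factory=dict)

    def on_multicast_event(self, vpn_id: str, stream: Stream) -> str:
        # First IP multicast event message seen for a VPN instance causes
        # generation of the default mLDP tunnel; any later stream that
        # carries traffic for the same VPN instance reuses that tunnel.
        tunnel = self.default_tunnels.setdefault(vpn_id, f"default-mldp-{vpn_id}")
        self.stream_to_tunnel[(vpn_id, stream)] = tunnel
        return tunnel

pe = RootPE()
t1 = pe.on_multicast_event("vpn-1", ("10.0.0.1", "232.1.1.1"))
t2 = pe.on_multicast_event("vpn-1", ("10.0.0.2", "232.1.1.2"))
assert t1 == t2  # both streams of the VPN share the default mLDP tunnel
```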
[0011] A network device for enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network is described. The network device includes one or more processors; and non-transitory computer readable storage media storing instructions, which when executed by the one or more processors causes the network device to: receive a first internet protocol (IP) multicast event message from a first network device of a first VPN instance, where the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic; cause generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, where the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver; receive a second IP multicast event message from a second network device of the first VPN instance, where the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic; determine whether the second multicast stream and the first multicast stream include traffic for the first VPN instance; responsive to determining that the second multicast stream and the first multicast stream include traffic within the first VPN instance, cause packets of the second multicast stream to be forwarded through the default mLDP tunnel; and receive, over the default mLDP tunnel, packets of the first and the second multicast stream to be forwarded towards the first and the second receiver respectively.
[0012] One general aspect includes a non-transitory computer readable storage medium storing instructions, which when executed by a processor of a network device causes the network device to perform operations including receiving a first internet protocol (IP) multicast event message from a first network device of a first VPN instance, where the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic; causing generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, where the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver; receiving a second IP multicast event message from a second network device of the first VPN instance, where the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic; determining whether the second multicast stream and the first multicast stream include traffic for the first VPN instance; responsive to determining that the second multicast stream and the first multicast stream include traffic within the first VPN instance, causing packets of the second multicast stream to be forwarded through the default mLDP tunnel; and receiving, over the default mLDP tunnel, packets of the first and the second multicast stream to be forwarded towards the first and the second receiver respectively
[0013] One general aspect includes a method of enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network. The method including monitoring a plurality of multicast streams transmitted over a default multicast label distribution protocol (mLDP) tunnel, where the default mLDP tunnel is used to forward the plurality of multicast streams from sources to receivers of a VPN instance through an MPLS network; responsive to determining that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, performing the following: causing generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream; forwarding packets of the first multicast stream through the dedicated mLDP tunnel; forwarding packets of a subset of the plurality of multicast streams through the default mLDP tunnel, where the subset of the plurality of multicast streams does not include the first multicast stream.
[0014] One general aspect includes a network device for enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network, the network device including one or more processors; and a non-transitory computer readable storage medium that stores instructions, which when executed by the one or more processors cause the network device to: monitor a plurality of multicast streams transmitted from sources to receivers of a VPN instance over a default multicast label distribution protocol (mLDP) tunnel of an MPLS network, responsive to determining that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, perform the following: cause generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream; forward packets of the first multicast stream through the dedicated mLDP tunnel; and forward packets of a subset of the plurality of multicast streams through the default mLDP tunnel, where the subset of the plurality of multicast streams does not include the first multicast stream.
[0015] One general aspect includes a non-transitory computer readable storage medium storing instructions, which when executed by a processor of a network device cause the network device to perform operations including: monitoring a plurality of multicast streams transmitted over a default multicast label distribution protocol (mLDP) tunnel, where the default mLDP tunnel is used to forward the plurality of multicast streams from sources to receivers of a VPN instance through an MPLS network; responsive to determining that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, performing the following: causing generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream; forwarding packets of the first multicast stream through the dedicated mLDP tunnel; and forwarding packets of a subset of the plurality of multicast streams through the default mLDP tunnel, where the subset of the plurality of multicast streams does not include the first multicast stream.
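The monitoring step in these embodiments can be sketched as a periodic policy check over the streams on the default tunnel. Here the forwarding policy is assumed, purely for illustration, to be a per-stream rate threshold, and the tunnel names are hypothetical.

```python
from typing import Dict

def reassign_streams(rates_kbps: Dict[str, int], threshold_kbps: int,
                     vpn_id: str) -> Dict[str, str]:
    """Split streams between the default tunnel and dedicated tunnels.

    A stream whose observed rate exceeds the threshold no longer
    satisfies the (assumed) forwarding policy and is moved to its own
    dedicated mLDP tunnel; the remaining subset stays on the default
    mLDP tunnel of the VPN instance.
    """
    assignment = {}
    for stream, rate in rates_kbps.items():
        if rate > threshold_kbps:
            assignment[stream] = f"dedicated-mldp-{vpn_id}-{stream}"
        else:
            assignment[stream] = f"default-mldp-{vpn_id}"
    return assignment

rates = {"s1": 100, "s2": 9000, "s3": 250}
plan = reassign_streams(rates, threshold_kbps=1000, vpn_id="vpn-1")
assert plan["s2"].startswith("dedicated")          # high-rate stream moved
assert plan["s1"] == plan["s3"] == "default-mldp-vpn-1"
```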
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:
[0017] Figure 1 illustrates a block diagram of an exemplary multicast VPN service enabled across an MPLS network, according to a standard embodiment.
[0018] Figure 2 illustrates a block diagram of an exemplary scalable multicast VPN service enabled across an MPLS network, in accordance with some embodiments.
[0019] Figure 3 illustrates a block diagram of an exemplary scalable multicast VPN service enabled across an MPLS network, in accordance with some embodiments.
[0020] Figure 4A illustrates an exemplary LDP FEC message to be transmitted from a network device for generating an mLDP tunnel across the MPLS network, in accordance with some embodiments.
[0021] Figure 4B illustrates an exemplary opaque value of the LDP FEC message in accordance with some embodiments.
[0022] Figure 4C illustrates an exemplary opaque value of the LDP FEC message in accordance with some embodiments.
[0023] Figure 5 illustrates a flow diagram of exemplary operations for enabling a multicast VPN service across an MPLS network, in accordance with some embodiments.
[0024] Figure 6 illustrates a block diagram of an exemplary scalable multicast VPN service enabled across an MPLS network where a dedicated mLDP tunnel is used to forward traffic of a given multicast stream, in accordance with some embodiments.
[0025] Figure 7 illustrates an exemplary control message for causing the generation of a dedicated mLDP tunnel for a multicast stream, in accordance with some embodiments.
[0026] Figure 8 illustrates a flow diagram of exemplary operations for enabling scalable multicast VPN service across an MPLS network where a dedicated mLDP tunnel is used to forward traffic of a given multicast stream, in accordance with some embodiments.
[0027] Figure 9A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.

[0028] Figure 9B illustrates an exemplary way to implement a special-purpose network device according to some embodiments of the invention.
[0029] Figure 9C illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
DETAILED DESCRIPTION
[0030] The following description describes methods and apparatus for enabling a scalable multicast virtual private network service across a multicast label distribution protocol network. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
[0031] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0032] Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention. Throughout the following description similar reference numerals have been used to denote similar elements such as components, features of a system and/or operations performed in a system or element of the system, when applicable.

[0033] In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled with each other.
[0034] An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals). Thus, an electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors (e.g., wherein a processor is a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device. Typical electronic devices also include a set of one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.
For example, the set of physical NIs (or the set of physical NI(s) in combination with the set of processors executing code) may perform any formatting, coding, or translating to allow the electronic device to send and receive data whether over a wired and/or a wireless connection. In some embodiments, a physical NI may comprise radio circuitry capable of receiving data from other electronic devices over a wireless connection and/or sending data out to other devices via a wireless connection. This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radio frequency communication. The radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc.). The radio signal may then be transmitted via antennas to the appropriate recipient(s). In some embodiments, the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter. The NIC(s) may facilitate connecting the electronic device to other electronic devices, allowing them to communicate over a wired connection by plugging a cable into a physical port connected to a NIC. One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
[0035] A network device (ND) is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices). Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).
[0036] Figure 1 illustrates a block diagram of an exemplary multicast VPN service enabled across an MPLS core network according to a standard approach. Figure 1 illustrates a prior art scenario based upon in-band signaling, where each IP multicast stream in a given VPN network has an associated mLDP tunnel in the core MPLS network.
[0037] In Figure 1, there is a one-to-one correspondence between the IP multicast streams, (S1, G1) and (S2, G2), that need to be forwarded from a first site of a VPN network (e.g., from network 107) and the mLDP tunnels (1 and 2) that carry the multicast traffic within the MPLS network 108 towards a second site of the VPN network (e.g., network 109). Packets of the (S1, G1) multicast stream that originate from the S1 multicast source (e.g., ND 101A) are forwarded through the mLDP tunnel 1. Packets of the (S2, G2) multicast stream that originate from the S2 multicast source (e.g., ND 101B) are forwarded through the mLDP tunnel 2. The one-to-one correspondence between IP multicast streams and mLDP tunnels causes scalability challenges in the provider's network (i.e., the MPLS network 108). In fact, each PE of the provider's network typically supports several VPN customers (i.e., several VPN instances), which are not shown in Figure 1, and therefore would need to create and maintain states for a significant number of mLDP tunnels associated with all of these VPN customers. This causes an enormous load on the network devices of the MPLS network due to the number of states that need to be maintained for the MPLS tunnels and greatly limits scalability of the multicast VPN service. Given that for each VPN customer, the provider's network needs to maintain a high number of mLDP tunnels (a tunnel for each multicast stream), the in-band signaling solution is inapplicable in VPN scenarios.

[0038] The embodiments of the present invention provide a scalable solution for a multicast VPN service enabled across an MPLS core network. In a first embodiment, a framework is proposed for aggregating multicast streams of a VPN network to be forwarded through a single default mLDP tunnel generated according to an in-band signaling mechanism.
In a second embodiment, mechanisms are proposed to enable an ingress PE of the MPLS network to forward packets of a given multicast stream through a separate dedicated mLDP tunnel, instead of using the default mLDP tunnel, upon determination that the multicast stream does not satisfy a policy requirement. As will be described in further detail below, the embodiments of the present invention leverage the advantages of an enhanced mLDP in-band signaling mechanism and provide a scalable service with respect to VPN multicast traffic aggregation while limiting the LSP states maintained at the core MPLS network.
[0039] Figure 2 illustrates a diagram of an exemplary scalable multicast VPN service enabled across an MPLS network, in accordance with some embodiments. The networks 107, 109, and 110 may include any number of CEs and NDs acting as sources or receivers of multicast traffic streams. In the illustrated example, the CEs of the networks 109 and 110 couple the receiver NDs (e.g., ND 106 or ND 116) with a PE (e.g., ND 104 or ND 114) of the MPLS network 108. The CEs of the network 107 couple the source NDs (e.g., ND 101A or ND 101B) with a PE (e.g., PE 103) of the MPLS network 108. Each CE or PE is a network device that can be implemented as described with reference to Figures 9A-C. In the illustrated embodiment, networks 107 and 109 are part of the same VPN instance that belongs to a customer of a service provider. The service provider is an administrator or an owner of the MPLS network 108 that provides multiple networking services to customers, in particular multicast VPN services.
[0040] The MPLS network 108 includes a set of network devices such as routers or switches forming a provider network that implements the MPLS protocol. In one non-limiting example, the MPLS network can be a core network of a cellular network coupled to an access network (e.g., network 109, network 110). In another example, the MPLS network 108 is an access network of the cellular network. The network 109 and optional network 110 include multicast receivers (ND 106 and ND 116), which are receivers of multicast content (e.g., one or more multicast streams) from a source (e.g., ND 101A or 101B) of a multicast stream. The networks 109 and 110 can include any number of receivers and the network 107 can include any number of sources without departing from the scope of the present invention. The sources of the multicast streams can be coupled through the MPLS network 108 to any number of CEs and receivers. These networks can interface through any number of PEs such as ND 103, ND 104, and ND 114. While the example of Figure 2 illustrates a single VPN instance of a single customer (which includes several sites: network 107, network 109 and optional network 110), the MPLS network 108 can provide multicast VPN services to multiple customers (i.e., to multiple VPN instances) without departing from the scope of the current invention. The illustrated network of Figure 2 is simplified and will be described with respect to a single VPN instance (including the sites 107 and 109) for the sake of clarity only.
[0041] In Figure 2, ND 105 is coupled with receivers (e.g., ND 106) from the VPN instance that request to receive traffic of a first multicast stream (S1, G1) and a second multicast stream (S2, G2). The ND 105 generates an IP multicast event message 11a (such as a PIM Join, Multicast Source Discovery Protocol (MSDP) Source Announcement (SA), BGP Source Active auto-discovery route or Rendezvous Point (RP) discovery). The message 11a includes an identification of the multicast stream (S1, G1) and an identifier of the VPN instance to which the source and receiver of the multicast stream belong. Upon receipt of the message 11a, the ND 104 causes the generation of a default mLDP tunnel for forwarding the traffic of the first multicast stream (S1, G1). While in this embodiment, the first multicast stream is identified by corresponding source and group addresses, in other embodiments, only the source address is used without departing from the scope of the present invention.
[0042] The ND 104 maintains within its forwarding tables a correspondence between IP multicast trees of the multicast streams and the mLDP tunnel created. Therefore, upon receipt of the IP multicast event message 11a, the ND 104 keeps track of an association between the mLDP tunnel (which is to be generated) and the first multicast stream as identified by the source and group addresses (S1, G1) and an identifier of the VPN instance to which the source and receiver belong. The mLDP tunnel becomes part of the IP multicast tree associated with the multicast stream (S1, G1).
[0043] In some embodiments, to generate the default mLDP tunnel, ND 104 performs an mLDP in-band signaling mechanism, in which multicast stream information is carried in an LDP Forwarding Equivalent Class (FEC) message through the MPLS network. In order to aggregate multiple multicast traffic streams of a VPN network for transmission over a single mLDP tunnel, the source and group addresses in the opaque value of the LDP FEC message causing the generation of the mLDP tunnel are set to wild cards (i.e., zeros). The opaque value of the LDP FEC message includes an identifier of the VPN instance. Figure 4A illustrates an exemplary LDP FEC message 400 to be transmitted from a network device for generating an mLDP tunnel across the MPLS network 108, in accordance with some embodiments. The LDP FEC message 400 includes field 402 including the address of the root of the mLDP tunnel to be generated. In the example of Figure 2, the root of the mLDP tunnel is ND 103, and the root address added to the LDP FEC message is the IP address of ND 103. The LDP FEC message further includes an opaque value field 404. Figure 4B illustrates an exemplary opaque value 406 of the LDP FEC message in accordance with one embodiment. The opaque value 406 includes a type 408 (identifying the type of the opaque value 406), length 410 (indicating the length of the opaque value 406), a field for the identification of the source of the multicast stream 412, a field for the identification of the group of the multicast stream 414, and a field for an identifier of the VPN network that is the route distinguisher (RD) 416. As illustrated in Figure 4B, the source and the group field are set to include a wildcard (*) therefore causing the mLDP tunnel to be generated to forward all multicast streams of the VPN network as identified with the route distinguisher RD.
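The wildcarded opaque value of Figure 4B can be sketched as follows. The patent names the fields (type, length, source, group, RD) but does not fix their widths or a type code point, so the one-octet type, two-octet length, four-octet IPv4 addresses, eight-octet RD, and the 0x88 code used below are illustrative assumptions; an all-zeros address stands in for the wildcard (*):

```python
import struct

WILDCARD = 0  # an all-zeros IPv4 address encodes the "*" wildcard of Figure 4B

def encode_opaque_value(opaque_type: int, rd: bytes,
                        source: int = WILDCARD, group: int = WILDCARD) -> bytes:
    """Build the opaque value of Figure 4B: Type | Length | Source | Group | RD.

    `opaque_type` is a placeholder code point, `rd` is the 8-byte BGP route
    distinguisher, and source/group are IPv4 addresses as integers
    (0 meaning the wildcard *).
    """
    body = struct.pack("!II", source, group) + rd
    return struct.pack("!BH", opaque_type, len(body)) + body

# A FEC element aggregating all streams of the VPN identified by the RD:
rd = struct.pack("!HHI", 0, 64512, 17)   # type-0 RD: ASN 64512, value 17
opaque = encode_opaque_value(0x88, rd)   # source=*, group=* -> one default tunnel
```

Because source and group are wildcarded, every egress PE of the same VPN instance emits an identical FEC element, so the LSRs merge their label mappings into a single default mLDP tunnel rooted at the ingress PE.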
[0044] Referring back to Figure 2, ND 104 transmits the LDP FEC message 12 with the opaque value including the route distinguisher (RD) and wildcards for the group and the source of the multicast stream. Based upon the LDP FEC message 12, all the receivers in network 109 of the VPN instance will trigger a single mLDP tunnel for carrying the multicast traffic from the source S1 (ND 101A) across the MPLS network 108. The generated mLDP tunnel 13 (i.e., LSP distribution tree) includes a root (e.g., ND 103), zero or more transit network devices, and one or more leaves (e.g., ND 104, ND 114). The leaves initiate the mLDP tunnel setup and tear-down (e.g., ND 104 initiates the generation of the mLDP tunnel 13 by transmitting LDP FEC message 12 with opaque value (RD, *, *)) and install forwarding states to deliver the traffic received on the mLDP tunnel 13 to the receivers of the VPN instance within the network 109. As a result of the LDP FEC message 12, transit NDs (not illustrated in Figure 2) install MPLS forwarding states and propagate the mLDP setup (or tear-down) messages toward the root, ND 103. The root of the mLDP tunnel 13, ND 103, installs forwarding states to map traffic into the mLDP tunnel 13 from the sources of the multicast streams included in network 107.
[0045] In some embodiments, ND 104 further receives a second IP multicast event message (11b) from a second network device (e.g., ND 105) coupled to a second receiver of a second multicast stream (e.g., ND 106). In the illustrated example of Figure 2, the second network device and the second receiver of the second multicast stream (S2, G2) are the same as the CE and the receiver of the multicast stream (S1, G1); however, in other embodiments, these network devices may be different and part of the same VPN instance. The second IP multicast event message 11b includes an identifier of the VPN instance and an identification of the second multicast stream (S2, G2) of which the ND 106 requests to receive traffic. Upon receipt of the second IP multicast event message 11b, the ND 104 determines whether the second multicast stream relates to traffic for the first VPN instance for which an mLDP tunnel is already generated. In this example, the mLDP tunnel 13 is generated when the first IP multicast event message 11a is received. Therefore, in response to determining that the second multicast stream (S2, G2) relates to traffic for the first VPN instance, the ND 104 causes packets of the second multicast stream (S2, G2) to be forwarded through the default mLDP tunnel 13. The traffic of the second multicast stream (S2, G2) is forwarded from the source S2 (ND 101B) towards the receiver, ND 106, through the mLDP tunnel 13 in the MPLS network.
[0046] As opposed to standard approaches that provide a one-to-one correspondence between a multicast stream (and consequently its IP multicast tree) and the mLDP tunnel within the MPLS network 108 (see Figure 1), the embodiments of the present invention enable the creation of a single mLDP tunnel 13 that will be used for forwarding traffic for both the first and the second multicast streams (S1, G1) and (S2, G2).
[0047] Multiple RD values for a single VPN instance:
[0048] In some embodiments, a VPN instance can be associated with multiple RD values. In these embodiments, a route distinguisher RD does not uniquely identify the VPN instance. Therefore, if the mechanism above of transmitting an LDP FEC message including an RD is used, it will lead to the creation of several mLDP tunnels for forwarding traffic of a same VPN instance. While these embodiments still provide significant advantages when compared to the prior art approaches discussed with respect to Figure 1, as multicast streams are forwarded through a default mLDP tunnel per RD, the embodiments described with reference to Figure 3 below provide additional advantages by enabling forwarding of traffic of the VPN instance through a single mLDP tunnel.
[0049] Figure 3 illustrates a block diagram of an exemplary scalable multicast VPN service enabled across an MPLS network, in accordance with some embodiments. In the scenario of Figure 3, the use of a new extension to the LDP FEC message 32 enables the generation of the mLDP tunnel 33. The mLDP tunnel 33 aggregates all multicast traffic received from various sources and destined towards various receivers of a same VPN instance. In order to achieve the aggregation of multicast traffic per VPN instance within the MPLS network, the opaque value 420 of Figure 4C is used. When compared to the opaque value 406 of Figure 4B, the opaque value 420 includes similar fields with the exception of field 418, which includes the VPN-ID of the VPN instance as opposed to the RD. The VPN-ID is a global identifier that uniquely identifies the VPN instance. The VPN-ID is defined in IETF RFC 2685. Thus, in Figure 3, upon receipt of an IP multicast event message for a first multicast stream (S1, G1) or the second multicast stream (S2, G2), the ND 104 causes the generation of a single mLDP tunnel 33 capable of forwarding traffic of both multicast streams within the MPLS network towards the receiver ND 106. The generation of the single mLDP tunnel 33 is caused by the transmission of the LDP FEC message with the opaque value including (VPN-ID, *, *). Thus, similarly to the embodiments discussed above for Figures 2 and 4B, the generation of the mLDP tunnel for all traffic of a VPN instance based upon the in-band signaling using the VPN-ID of a VPN instance enables the creation of a single mLDP tunnel 33 that will be used for forwarding traffic for both the first and the second multicast streams (S1, G1) and (S2, G2), as opposed to standard approaches that provide a one-to-one correspondence between a multicast stream and the mLDP tunnel within the MPLS network 108 (see Figure 1). This significantly decreases the amount of forwarding states maintained within the MPLS network.
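The opaque value 420 of Figure 4C can be sketched in the same style. RFC 2685 defines the VPN-ID as seven octets (a 3-octet OUI followed by a 4-octet VPN index); the type code and the remaining field widths are again illustrative assumptions:

```python
import struct

def encode_vpnid_opaque(opaque_type: int, oui: int, vpn_index: int) -> bytes:
    """Opaque value of Figure 4C: Type | Length | Source | Group | VPN-ID.

    The 7-byte RFC 2685 VPN-ID (3-byte OUI + 4-byte index) replaces the route
    distinguisher in field 418; source and group stay wildcarded (all zeros)
    so one tunnel carries the whole VPN instance. `opaque_type` is a
    placeholder code point.
    """
    vpn_id = oui.to_bytes(3, "big") + struct.pack("!I", vpn_index)
    body = struct.pack("!II", 0, 0) + vpn_id  # source=*, group=*
    return struct.pack("!BH", opaque_type, len(body)) + body

# Two sites of the same customer, even with different RDs, now produce the
# same FEC element, so only one mLDP tunnel 33 is signaled:
fec_a = encode_vpnid_opaque(0x89, 0x00005E, 42)
fec_b = encode_vpnid_opaque(0x89, 0x00005E, 42)
```

The design point is that the FEC element is the aggregation key: whatever uniquely identifies the VPN instance in field 418 determines how many default tunnels the core must hold, which is why the globally unique VPN-ID collapses the multiple-RD case into a single tunnel.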
[0050] The operations in the flow diagram of Figure 5 will be described with reference to the exemplary embodiments of the Figures 2-4C. However, it should be understood that the operations of the flow diagram of Figure 5 can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagram of Figure 5. The operations below will be described with reference to the egress PE, ND 104 of Figures 2-3. However, one of ordinary skill in the art would understand that this is intended to be exemplary only and the MPLS network may include other egress PEs coupled with other sites of the VPN instance or of other VPN instances, and the operations described below will be performed in these other PEs in a similar manner.
[0051] At operation 502, ND 104 receives a first IP multicast event message (e.g., message 11a) from a first network device (ND 105) of a first VPN instance. The first IP multicast event message includes an identifier of the first VPN instance (e.g., an RD or a VPN-ID identifying the VPN instance) and an identification (e.g., a source address S1 and/or a group address G1) of a first multicast stream for which a first receiver (ND 106) of the first VPN instance requests to receive traffic. In some embodiments, the first network device is a customer equipment (CE) in a second site of the first VPN instance coupled with one or more receivers of multicast streams. The CE couples the receivers with sources of the multicast streams through an MPLS network (e.g., MPLS network 108).
[0052] At operation 504, ND 104 causes the generation of a default multicast label distribution protocol (mLDP) tunnel (e.g., tunnel 13 or 33) for forwarding traffic of the first multicast stream (S1, G1) from a first source (ND 101A) to the first receiver (ND 106) of the first multicast stream through the MPLS network 108. The MPLS network couples a first site (IP network 107) of the first VPN instance including the first source (ND 101A) and a second site (IP network 109) of the first VPN instance including the first receiver (ND 106).
[0053] At operation 508, ND 104 receives a second IP multicast event message (e.g., 11b) from a second network device of the first VPN instance. In the example of Figures 2-3, the second network device is the same as the first network device, ND 105; however, in other embodiments, the second network device can be different without departing from the scope of the present invention. The second IP multicast event message (11b) includes the identifier of the first VPN instance (e.g., an RD or a VPN-ID) and an identification of a second multicast stream (e.g., a source address S2 and/or a group address G2) for which a second receiver of the first VPN instance requests to receive traffic. In the example of Figures 2-3, the second receiver is the same as the first receiver, ND 106; however, in other embodiments, the second receiver can be different without departing from the scope of the present invention.
[0054] At operation 510, ND 104 determines whether the second multicast stream (S2, G2) and the first multicast stream (S1, G1) include traffic for the first VPN instance. Responsive to determining that the second multicast stream (S2, G2) and the first multicast stream (S1, G1) include traffic for the first VPN instance, ND 104 causes (at operation 512) packets of the second multicast stream to be forwarded through the default mLDP tunnel (e.g., mLDP tunnel 13 or 33). The ND 104 causes the packets of the second multicast stream to be forwarded through the default mLDP tunnel by causing the configuration of the forwarding tables of the NDs of the MPLS network to include forwarding table entries for forwarding the packets through the default mLDP tunnel. As a result, ND 104 receives (at operation 506), over the default mLDP tunnel, packets of the first and the second multicast streams to be forwarded towards the first and the second receivers respectively.
[0055] In some embodiments, when the ND 104 determines (at operation 510) that the second multicast stream (S2, G2) and the first multicast stream (S1, G1) include traffic for different VPN instances, ND 104 causes (at operation 504) the generation of a new default mLDP tunnel (that is different from the first default mLDP tunnel associated with the first multicast stream) to be associated with the second multicast stream of the second VPN instance. The ND 104 causes the packets of the second multicast stream to be forwarded through the new default mLDP tunnel by causing the configuration of the forwarding tables of the NDs of the MPLS network to include forwarding table entries for forwarding the packets through this second default mLDP tunnel instead of the first default mLDP tunnel.
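The egress-PE decision flow of operations 502-512 can be sketched as follows; tunnel handles are plain strings standing in for the LDP FEC signaling, and the class and method names are illustrative, not taken from the patent:

```python
class EgressPE:
    """Sketch of operations 502-512: one default mLDP tunnel per VPN
    instance, reused for every multicast stream of that instance."""

    def __init__(self):
        self._default_tunnel = {}   # VPN identifier (RD or VPN-ID) -> tunnel
        self._streams = {}          # tunnel -> set of (source, group) pairs

    def on_multicast_event(self, vpn_id, source, group):
        tunnel = self._default_tunnel.get(vpn_id)
        if tunnel is None:
            # Operation 504: the first stream of this VPN instance triggers
            # the generation of its default mLDP tunnel.
            tunnel = f"mldp-default-{vpn_id}"
            self._default_tunnel[vpn_id] = tunnel
            self._streams[tunnel] = set()
        # Operations 510/512: same VPN instance -> reuse the existing tunnel.
        self._streams[tunnel].add((source, group))
        return tunnel

pe = EgressPE()
t1 = pe.on_multicast_event("vpn-A", "S1", "G1")
t2 = pe.on_multicast_event("vpn-A", "S2", "G2")  # same tunnel as t1
t3 = pe.on_multicast_event("vpn-B", "S3", "G3")  # new default tunnel
```

The lookup key makes the branch at operation 510 explicit: streams of the same VPN instance map to one tunnel, while a different VPN identifier falls back to operation 504 and creates a new default tunnel.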
[0056] Dedicated mLDP tunnel for a multicast stream:
[0057] In some embodiments, the use of a default mLDP tunnel (e.g., tunnel 13 or tunnel 33) to forward all multicast streams of a VPN can cause some multicast streams to be forwarded towards PEs of the MPLS network that do not connect to any receivers of these multicast streams. This scenario may result in a waste of bandwidth on the path toward these PEs, as well as a waste of processing bandwidth at the PEs. In one embodiment, in order to avoid the waste of bandwidth within the MPLS network, a dedicated mLDP tunnel can be generated for forwarding a given multicast stream in addition to generating the default mLDP tunnel for forwarding traffic of other multicast streams of a same VPN instance. For example, the dedicated mLDP tunnel can be used to forward traffic originating from high rate sources or alternatively from a source designated by an administrator of the multicast service. Figure 8 illustrates a flow diagram of exemplary operations for enabling scalable multicast VPN service across an MPLS network where a dedicated mLDP tunnel is used to forward traffic of a given multicast stream, in accordance with some embodiments.
[0058] The operations in the flow diagram of Figure 8 will be described with reference to the exemplary embodiments of the Figures 6-7. However, it should be understood that the operations of the flow diagram of Figure 8 can be performed by embodiments of the invention other than those discussed with reference to Figures 6-7, and the embodiments of the invention discussed with reference to Figures 6-7 can perform operations different than those discussed with reference to the flow diagram of Figure 8.
[0059] In some embodiments, the operations 800 are performed once a default mLDP tunnel 61 has been generated to aggregate and forward all multicast streams of a VPN instance over the MPLS network 108. In these embodiments, multiple multicast streams are forwarded over the default mLDP tunnel 61 (e.g., traffic for (S1, G1), (S2, G2) and (S3, G3) is forwarded towards the receivers ND 106 and ND 116). In one example (not illustrated), prior to creating a dedicated mLDP tunnel 63 for forwarding the traffic of multicast stream (S3, G3), the three multicast streams are forwarded towards the two egress PEs ND 104 and ND 114 even if the egress PEs do not serve receivers of each one of the streams. While all three multicast streams are forwarded towards ND 106 and ND 116, only ND 116 has requested to receive the traffic of multicast stream (S3, G3). Thus, prior to using the dedicated mLDP tunnel 63 for multicast stream (S3, G3), the traffic of the multicast stream (S3, G3) causes a waste in bandwidth and processing power within the MPLS network. In order to avoid this waste, in particular in cases of high rate traffic transmission, the ND 103 performs the exemplary operations of Figure 8. At operation 802, the ND 103 monitors multiple multicast streams (e.g., (S1, G1), (S2, G2), and (S3, G3)) transmitted from sources (ND 101A including source S1, ND 101B including source S2, and ND 101C including source S3) to receivers of a VPN instance over the default mLDP tunnel 61 of the MPLS network 108. For example, ND 103 may monitor the transmission rate of each multicast stream being forwarded through the default mLDP tunnel 61, and may determine whether the transmission rate exceeds a predetermined threshold rate. In other embodiments, ND 103 may be configured to monitor the multicast streams and identify a predetermined multicast stream.
An administrator may input an identification of a multicast stream that needs to be forwarded through a dedicated tunnel instead of being aggregated with the other multicast streams of the VPN instance.

[0060] In response to determining (operation 804) that the first multicast stream (e.g., multicast stream (S3, G3)) from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, ND 103 causes (operation 808) the generation of a dedicated mLDP tunnel for forwarding packets of the (S3, G3) multicast stream, and forwards (operation 810) these packets through the dedicated mLDP tunnel 63 instead of using the default mLDP tunnel. ND 103 further forwards packets of the other multicast streams (e.g., the subset (S1, G1) and (S2, G2)) through the default mLDP tunnel 61. This results in the multicast stream (S3, G3) being forwarded only towards the receivers that have requested to receive that multicast stream (e.g., ND 116), instead of all the receivers of the VPN instance. This significantly reduces the bandwidth usage, for example, when the multicast stream has a high rate.
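The ingress-side policy check of operations 802-810 can be sketched as a small monitor. The rate threshold, its units (bits per second), and the class and field names are illustrative assumptions; the patent only requires that a stream failing the forwarding policy (e.g., exceeding a threshold rate, or being designated by an administrator) be moved to a dedicated tunnel:

```python
from dataclasses import dataclass, field

@dataclass
class IngressMonitor:
    """Sketch of operations 802-810 at the ingress PE (ND 103): streams
    whose measured rate exceeds the policy threshold, or that were pinned
    by the administrator, are moved to a dedicated mLDP tunnel."""
    threshold_bps: float
    pinned: set = field(default_factory=set)     # administrator-designated (S, G)
    dedicated: set = field(default_factory=set)  # streams already moved

    def observe(self, stream, rate_bps):
        if stream in self.dedicated:
            return "dedicated"
        if rate_bps > self.threshold_bps or stream in self.pinned:
            # Operation 808: trigger dedicated tunnel setup for this stream.
            self.dedicated.add(stream)
            return "dedicated"
        return "default"  # keep aggregating over the default tunnel 61

mon = IngressMonitor(threshold_bps=10e6, pinned={("S9", "G9")})
r1 = mon.observe(("S1", "G1"), 2e6)    # under threshold -> default tunnel
r2 = mon.observe(("S3", "G3"), 50e6)   # exceeds threshold -> dedicated tunnel
r3 = mon.observe(("S9", "G9"), 1e3)    # pinned by administrator -> dedicated
```

Once a stream is in the dedicated set it stays there, mirroring the fact that the dedicated tunnel 63, once signaled, keeps carrying that stream while the default tunnel 61 carries the rest.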
[0061] In some embodiments, when the ingress PE, ND 103, determines that a given multicast stream does not satisfy the forwarding policy (e.g., the transmission rate of that multicast stream exceeds a predetermined threshold rate), it triggers the transmission of a control message 700 as illustrated in Figure 7. The control message 700 is a new type of PIM control message. It includes a field 702 indicating the type of message; a field 704 indicating the length of the control message; a reserved set of bits 706; a field 708 indicating the address of the source for which the dedicated mLDP tunnel is to be generated; and a field 710 indicating the address of the group for which the dedicated mLDP tunnel is to be generated. The type of the message includes a new value that represents a request to generate a dedicated mLDP tunnel. The ND 103 encapsulates the control message 700 and uses the destination address 224.0.0.13 to transmit the control message over the default mLDP tunnel 61 (not illustrated in Figure 6).
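A minimal sketch of how the control message 700 might be serialized follows. The field widths are assumptions (a one-octet type 702, a one-octet length 704, a two-octet reserved field 706, then IPv4 source and group addresses 708 and 710), since the figure names the fields but not their sizes, and the new type value is likewise a placeholder:

```python
import socket
import struct

# Assumed value for the new PIM message type requesting a dedicated mLDP tunnel.
MSG_TYPE_DEDICATED_TUNNEL_REQUEST = 0x0A

def build_control_message(source_ip, group_ip,
                          msg_type=MSG_TYPE_DEDICATED_TUNNEL_REQUEST):
    src = socket.inet_aton(source_ip)   # field 708: source address
    grp = socket.inet_aton(group_ip)    # field 710: group address
    body = src + grp
    length = 4 + len(body)              # field 704: total message length
    # fields 702 (type), 704 (length), 706 (reserved), in network byte order
    header = struct.pack("!BBH", msg_type, length, 0)
    return header + body

msg = build_control_message("10.0.0.3", "232.1.1.3")
print(len(msg))  # 12
```

The resulting bytes would then be encapsulated and sent to 224.0.0.13 over the default mLDP tunnel 61, as described above.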
[0062] When the control message 700 is received at an egress PE of the MPLS network 108 (e.g., at ND 104), the egress PE decapsulates the control message and parses it. Based upon the destination group address (224.0.0.13), ND 104 punts the control message to the control plane. The control plane can be a centralized or distributed control plane without departing from the scope of the present invention. The message is then processed at an enhanced PIM module of the control plane and, based upon the type 702, the enhanced PIM module recognizes the message as a request for generating a dedicated mLDP tunnel for the given source and group included in the control message 700.
[0063] Once the control message 700 is processed at the control plane of ND 104, a new mLDP tunnel is caused to be generated using a process similar to the one used to generate the default mLDP tunnel. ND 104 transmits (not illustrated in Figure 6) an LDP FEC message with an opaque value including an identifier of the VPN instance (RD), as well as an identification of the group and the source of the multicast stream (S3, G3). This causes the ingress PE, ND 103, to use the dedicated mLDP tunnel 63 for forwarding the traffic of the multicast stream (S3, G3) instead of using the default mLDP tunnel 61, while using the default mLDP tunnel 61 for all other multicast streams.
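The opaque value carried in the LDP FEC message can be sketched as the 8-octet route distinguisher identifying the VPN instance, followed by the source and group addresses of the stream. This layout mirrors the VPNv4 in-band signaling opaque value of RFC 7246; the exact TLV framing is omitted and the RD value below is an illustrative example:

```python
import socket
import struct

def build_opaque_value(rd: bytes, source_ip: str, group_ip: str) -> bytes:
    """Concatenate the VPN instance identifier (RD) with the (S, G) pair,
    forming the payload an egress PE could place in its mLDP FEC element."""
    assert len(rd) == 8, "a route distinguisher is 8 octets"
    return rd + socket.inet_aton(source_ip) + socket.inet_aton(group_ip)

# Example type-0 RD "64512:100" (2-octet type, 2-octet AS, 4-octet assigned number).
rd = struct.pack("!HHI", 0, 64512, 100)
opaque = build_opaque_value(rd, "10.0.0.3", "232.1.1.3")
print(len(opaque))  # 16
```

Because the opaque value names both the VPN instance and the specific (S3, G3) stream, the ingress PE ND 103 can associate the resulting MP-LSP with exactly that stream.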
[0064] Coupling the mLDP tunnel with the sources of the multicast streams:
[0065] In the embodiments described above, a receiver (e.g., ND 106) causes the generation of an IP multicast tree and an mLDP tunnel associated with the IP multicast tree to forward traffic of a multicast stream through the MPLS network and towards the receiver. In order to enable the forwarding of traffic from the source towards the MPLS tunnel, i.e., to configure the network 107 to forward the multicast traffic to ND 103, several mechanisms can be used. In one embodiment, all the PEs of the MPLS network 108 are configured with anycast rendezvous point (anycast RP). This mechanism causes the ND 103 to learn about the sources hosted at the network 107, as the particular PE (ND 103) is the nearest RP of these sources as determined by anycast RP. In another embodiment, the ND 103 can be statically configured to join the groups for which the source is behind that PE.
[0066] The embodiments of the present invention provide several mechanisms for enabling a scalable multicast VPN service using mLDP in-band signaling in BGP/MPLS service provider networks. The various embodiments enable the aggregation of multiple multicast streams of a given VPN customer onto a single MP-LSP distribution tree (i.e., an mLDP tunnel). Some embodiments described above enable the switch from the use of a default mLDP tunnel for all multicast streams to having a dedicated mLDP tunnel for a given multicast stream to separately transmit this multicast stream.
[0067] The embodiments enable a reduction in the number of states related to the multicast traffic that need to be stored and maintained at the MPLS network, consequently reducing the need for storage and processing resources. The solution presented herein is highly scalable and enables a service provider to offer the multicast VPN service to multiple customers without a high burden on the network devices of the MPLS network. The solution further avoids the use of heavy out-of-band signaling such as BGP/PIM. Further, the solution avoids the use of a separate data plane and control plane for unicast and multicast traffic.
[0068] Architecture:
[0069] Each one of the NDs described in the preceding Figures 1-8 can be implemented according to one or more embodiments described with respect to Figures 9A-C below. Figure 9A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention. Figure 9A shows NDs 900A-H, and their connectivity by way of lines between 900A-900B, 900B-900C, 900C-900D, 900D-900E, 900E-900F, 900F-900G, and 900A-900G, as well as between 900H and each of 900A, 900C, 900D, and 900G. These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link). An additional line extending from NDs 900A, 900E, and 900F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
[0070] Two of the exemplary ND implementations in Figure 9A are: 1) a special-purpose network device 902 that uses custom application-specific integrated-circuits (ASICs) and a special-purpose operating system (OS); and 2) a general purpose network device 904 that uses common off-the-shelf (COTS) processors and a standard OS.
[0071] The special-purpose network device 902 includes networking hardware 910 comprising a set of one or more processor(s) 912, forwarding resource(s) 914 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 916 (through which network connections are made, such as those shown by the connectivity between NDs 900A-H), as well as non-transitory machine readable storage media 918 having stored therein networking software 920. During operation, the networking software 920 may be executed by the networking hardware 910 to instantiate a set of one or more networking software instance(s) 922. Each of the networking software instance(s) 922, and that part of the networking hardware 910 that executes that network software instance (be it hardware dedicated to that networking software instance and/or time slices of hardware temporally shared by that networking software instance with others of the networking software instance(s) 922), form a separate virtual network element 930A-R. Each of the virtual network element(s) (VNEs) 930A-R includes a control communication and configuration module 932A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 934A-R, such that a given virtual network element (e.g., 930A) includes the control communication and configuration module (e.g., 932A), a set of one or more forwarding table(s) (e.g., 934A), and that portion of the networking hardware 910 that executes the virtual network element (e.g., 930A). The networking software 920 includes an enhanced mLDP module 921. During operation, the enhanced mLDP module 921 may be executed by the networking hardware 910 to instantiate a set of one or more enhanced mLDP instances 931A-R which cause the ND 902 to perform the operations described with reference to Figures 2-8.
[0072] The special-purpose network device 902 is often physically and/or logically considered to include: 1) a ND control plane 924 (sometimes referred to as a control plane) comprising the processor(s) 912 that execute the control communication and configuration module(s) 932A-R; and 2) a ND forwarding plane 926 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 914 that utilize the forwarding table(s) 934A-R and the physical NIs 916. By way of example, where the ND is a router (or is implementing routing functionality), the ND control plane 924 (the processor(s) 912 executing the control communication and configuration module(s) 932A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 934A-R, and the ND forwarding plane 926 is responsible for receiving that data on the physical NIs 916 and forwarding that data out the appropriate ones of the physical NIs 916 based on the forwarding table(s) 934A-R.
[0073] Figure 9B illustrates an exemplary way to implement the special-purpose network device 902 according to some embodiments of the invention. Figure 9B shows a special-purpose network device including cards 938 (typically hot pluggable). While in some embodiments the cards 938 are of two types (one or more that operate as the ND forwarding plane 926 (sometimes called line cards), and one or more that operate to implement the ND control plane 924 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card). A service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)). By way of example, a service card may be used to terminate IPsec tunnels and execute the attendant authentication and encryption algorithms. These cards are coupled together through one or more interconnect mechanisms illustrated as backplane 936 (e.g., a first full mesh coupling the line cards and a second full mesh coupling all of the cards).
[0074] Returning to Figure 9A, the general purpose network device 904 includes hardware 940 comprising a set of one or more processor(s) 942 (which are often COTS processors) and physical NIs 946, as well as non-transitory machine readable storage media 948 having stored therein software 950. During operation, the processor(s) 942 execute the software 950 to instantiate one or more sets of one or more applications 964A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization. For example, in one such alternative embodiment the virtualization layer 954 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 962A-R called software containers that may each be used to execute one (or more) of the sets of applications 964A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes. 
In another such alternative embodiment the virtualization layer 954 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 964A-R is run on top of a guest operating system within an instance 962A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes. In yet other alternative embodiments, one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application. As a unikernel can be implemented to run directly on hardware 940, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 954, unikernels running within software containers represented by instances 962A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers). The software 950 includes an enhanced mLDP module 951.
During operation, the enhanced mLDP module 951 may be executed by the hardware 940 to instantiate a set of one or more application(s) 964A-R which cause the ND 904 to perform the operations described with reference to Figures 2-8.
[0075] The instantiation of the one or more sets of one or more applications 964A-R, as well as virtualization if implemented, are collectively referred to as software instance(s) 952. Each set of applications 964A-R, corresponding virtualization construct (e.g., instance 962A-R) if implemented, and that part of the hardware 940 that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared), forms a separate virtual network element(s) 960A-R.
[0076] The virtual network element(s) 960A-R perform similar functionality to the virtual network element(s) 930A-R - e.g., similar to the control communication and configuration module(s) 932A and forwarding table(s) 934A (this virtualization of the hardware 940 is sometimes referred to as network function virtualization (NFV)). Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in Data centers, NDs, and customer premise equipment (CPE). While embodiments of the invention are illustrated with each instance 962A-R corresponding to one VNE 960A-R, alternative embodiments may implement this correspondence at a finer level granularity (e.g., line card virtual machines virtualize line cards, control card virtual machine virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 962A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
[0077] In certain embodiments, the virtualization layer 954 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 962A-R and the physical NI(s) 946, as well as optionally between the instances 962A-R; in addition, this virtual switch may enforce network isolation between the VNEs 960A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
[0078] The third exemplary ND implementation in Figure 9A is a hybrid network device 906, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND. In certain embodiments of such a hybrid network device, a platform VM (i.e., a VM that implements the functionality of the special-purpose network device 902) could provide for para-virtualization to the networking hardware present in the hybrid network device 906.
[0079] Regardless of the above exemplary implementations of an ND, when a single one of multiple VNEs implemented by an ND is being considered (e.g., only one of the VNEs is part of a given virtual network) or where only a single VNE is currently being implemented by an ND, the shortened term network element (NE) is sometimes used to refer to that VNE. Also in all of the above exemplary implementations, each of the VNEs (e.g., VNE(s) 930A-R, VNEs 960A-R, and those in the hybrid network device 906) receives data on the physical NIs (e.g., 916, 946) and forwards that data out the appropriate ones of the physical NIs (e.g., 916, 946). For example, a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and
"destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP)), and differentiated services code point (DSCP) values.
[0080] The NDs of Figure 9A, for example, may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services. Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g.,
username/password accessed webpages providing email services), and/or corporate networks over VPNs. For instance, end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers. However, through compute and storage virtualization, one or more of the electronic devices operating as the NDs in Figure 9A may also host one or more such servers (e.g., in the case of the general purpose network device 904, one or more of the software instances 962A-R may operate as servers; the same would be true for the hybrid network device 906; in the case of the special-purpose network device 902, one or more such servers could also be run on a virtualization layer executed by the processor(s) 912); in which case the servers are said to be co-located with the VNEs of that ND.
[0081] A virtual network is a logical abstraction of a physical network (such as that in Figure 9A) that provides network services (e.g., L2 and/or L3 services). A virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
[0082] A network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network. A virtual network instance (VNI) is a specific instance of a virtual network on a NVE (e.g., a NE/VNE on an ND, a part of a NE/VNE on a ND where that NE/VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND). A virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be a physical or virtual port identified through a logical interface identifier (e.g., a VLAN ID).
[0083] Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IP VPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network). Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network-originated attacks, to avoid malformed route announcements), and management capabilities (e.g., fault detection and processing).
[0084] Figure 9C illustrates a network with a single network element on each of the NDs of Figure 9A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention. Specifically, Figure 9C illustrates network elements (NEs) 970A-H with the same connectivity as the NDs 900A-H of Figure 9A.
[0085] Figure 9C illustrates that the distributed approach 972 distributes responsibility for generating the reachability and forwarding information across the NEs 970A-H; in other words, the process of neighbor discovery and topology discovery is distributed.
[0086] For example, where the special-purpose network device 902 is used, the control communication and configuration module(s) 932A-R of the ND control plane 924 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching
(GMPLS) Signaling RSVP-TE)) that communicate with other NEs to exchange routes, and then selects those routes based on one or more routing metrics. Thus, the NEs 970A-H (e.g., the processor(s) 912 executing the control communication and configuration module(s) 932A-R) perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by
distributively determining the reachability within the network and calculating their respective forwarding information. Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 924. The ND control plane 924 programs the ND forwarding plane 926 with information (e.g., adjacency and route information) based on the routing structure(s). For example, the ND control plane 924 programs the adjacency and route information into one or more forwarding table(s) 934A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 926. For layer 2 forwarding, the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 902, the same distributed approach 972 can be implemented on the general purpose network device 904 and the hybrid network device 906.
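The RIB-to-FIB programming described above can be sketched as follows. The class and method names are hypothetical, not an actual router API; the sketch only shows routes learned into a RIB on the ND control plane 924 being selected and pushed down as next-hop entries for the ND forwarding plane 926:

```python
# Illustrative model of a control plane selecting best routes from its RIB
# and programming them into a forwarding table (FIB).
class ControlPlane:
    def __init__(self):
        self.rib = {}  # prefix -> list of (metric, next_hop) candidates

    def learn_route(self, prefix, next_hop, metric):
        """Record a route learned from a routing protocol exchange."""
        self.rib.setdefault(prefix, []).append((metric, next_hop))

    def program_fib(self):
        """Select the best route per prefix (lowest metric) and return the
        prefix -> next_hop mapping that would be programmed into the FIB."""
        return {prefix: min(routes)[1] for prefix, routes in self.rib.items()}

cp = ControlPlane()
cp.learn_route("192.0.2.0/24", "10.1.1.2", metric=20)
cp.learn_route("192.0.2.0/24", "10.1.1.6", metric=10)
print(cp.program_fib())  # {'192.0.2.0/24': '10.1.1.6'}
```

Only the selected route reaches the forwarding plane, which then forwards packets based purely on the programmed table, matching the division of labor between the ND control plane 924 and the ND forwarding plane 926.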
[0087] Figure 9C illustrates a centralized approach 974 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination. The illustrated centralized approach 974 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 976 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized. The centralized control plane 976 has a south bound interface 982 with a data plane 980 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 970A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes). The centralized control plane 976 includes a network controller 978, which includes a centralized reachability and forwarding information module 979 that determines the reachability within the network and distributes the forwarding information to the NEs 970A-H of the data plane 980 over the south bound interface 982 (which may use the OpenFlow protocol). Thus, the network intelligence is centralized in the centralized control plane 976 executing on electronic devices that are typically separate from the NDs.
[0088] For example, where the special-purpose network device 902 is used in the data plane 980, each of the control communication and configuration module(s) 932A-R of the ND control plane 924 typically include a control agent that provides the VNE side of the south bound interface 982. In this case, the ND control plane 924 (the processor(s) 912 executing the control communication and configuration module(s) 932A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 976 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 979 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 932A-R, in addition to communicating with the centralized control plane 976, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 974, but may also be considered a hybrid approach).
[0089] While the above example uses the special-purpose network device 902, the same centralized approach 974 can be implemented with the general purpose network device 904 (e.g., each of the VNE 960A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 976 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 979; it should be understood that in some embodiments of the invention, the VNEs 960A-R, in addition to communicating with the centralized control plane 976, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 906. In fact, the use of SDN techniques can enhance the NFV techniques typically used in the general purpose network device 904 or hybrid network device 906 implementations as NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run, and NFV and SDN both aim to make use of commodity server hardware and physical switches.
[0090] Figure 9C also shows that the centralized control plane 976 has a north bound interface 984 to an application layer 986, in which resides application(s) 988. The centralized control plane 976 has the ability to form virtual networks 992 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 970A-H of the data plane 980 being the underlay network)) for the application(s) 988. Thus, the centralized control plane 976 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
[0091] While Figure 9C shows the distributed approach 972 separate from the centralized approach 974, the effort of network control may be distributed differently or the two combined in certain embodiments of the invention. For example: 1) embodiments may generally use the centralized approach (SDN) 974, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree. Such embodiments are generally considered to fall under the centralized approach 974, but may also be considered a hybrid approach.
[0092] While Figure 9C illustrates the simple case where each of the NDs 900A-H implements a single NE 970A-H, it should be understood that the network control approaches described with reference to Figure 9C also work for networks where one or more of the NDs 900A-H implement multiple VNEs (e.g., VNEs 930A-R, VNEs 960A-R, those in the hybrid network device 906). Alternatively or in addition, the network controller 978 may also emulate the implementation of multiple VNEs in a single ND. Specifically, instead of (or in addition to) implementing multiple VNEs in a single ND, the network controller 978 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 992 (all in the same one of the virtual network(s) 992, each in different ones of the virtual network(s) 992, or some combination). For example, the network controller 978 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 976 to present different VNEs in the virtual network(s) 992 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
[0093] A network interface (NI) may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI. A virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface). A NI (physical or virtual) may be numbered (a NI with an IP address) or unnumbered (a NI without an IP address). A loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address. The IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
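The NI taxonomy of paragraph [0093] (physical vs. virtual, numbered vs. unnumbered, loopback, and the ND's set of IP addresses) can be modeled directly. The sketch below is illustrative only; all class and function names are hypothetical and not part of the disclosure.

```python
# Minimal data model for the NI taxonomy described above: an NI is physical or
# virtual, numbered (has an IP address) or unnumbered, and a loopback is a
# specific type of virtual NI whose address is the nodal loopback address.
from dataclasses import dataclass
from typing import Iterable, Optional, Set

@dataclass
class NetworkInterface:
    name: str
    physical: bool                     # physical NI vs. virtual NI
    ip_address: Optional[str] = None   # present => "numbered", absent => "unnumbered"
    loopback: bool = False             # loopbacks are virtual NIs used for management

    @property
    def numbered(self) -> bool:
        return self.ip_address is not None

def nd_ip_addresses(interfaces: Iterable[NetworkInterface]) -> Set[str]:
    """The IP addresses assigned to a ND's NIs are referred to as that ND's IP addresses."""
    return {ni.ip_address for ni in interfaces if ni.numbered}
```

A ND with a numbered loopback `lo0` and an unnumbered physical `eth0` would report only the loopback address as its nodal address set.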
[0094] Some NDs provide support for implementing VPNs (Virtual Private Networks) (e.g., Layer 2 VPNs and/or Layer 3 VPNs). For example, the NDs where a provider's network and a customer's network are coupled are respectively referred to as PEs (Provider Edge) and CEs (Customer Edge). In a Layer 2 VPN, forwarding typically is performed on the CE(s) on either end of the VPN and traffic is sent across the network (e.g., through one or more PEs coupled by other NDs). Layer 2 circuits are configured between the CEs and PEs (e.g., an Ethernet port, an ATM permanent virtual circuit (PVC), a Frame Relay PVC). In a Layer 3 VPN, routing typically is performed by the PEs. By way of example, an edge ND that supports multiple VNEs may be deployed as a PE; and a VNE may be configured with a VPN protocol, and thus that VNE is referred to as a VPN VNE.
[0095] While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
[0096] For example, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.).

Claims

What is claimed is:
1. A method of enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network, the method comprising:
receiving (502) a first Internet Protocol (IP) multicast event message from a first network device of a first VPN instance, wherein the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic;
causing (504) generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, wherein the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver;
receiving (508) a second IP multicast event message from a second network device of the first VPN instance, wherein the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic;
determining (510) whether the second multicast stream and the first multicast stream include traffic for the first VPN instance;
responsive to determining that the second multicast stream and the first multicast stream include traffic within the first VPN instance, causing (512) packets of the second multicast stream to be forwarded through the default mLDP tunnel; and
receiving (506), over the default mLDP tunnel, packets of the first and the second multicast stream to be forwarded towards the first and the second receiver respectively.
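The method of claim 1 can be sketched as an event handler on the ingress PE: the first multicast event message for a VPN instance triggers creation of one default mLDP tunnel, and every subsequent stream of the same VPN instance is mapped onto that same tunnel rather than a new one. All names (`MulticastEvent`, `VpnState`, `handle_event`) are hypothetical illustrations, not part of the disclosure.

```python
# Sketch of claim 1: one shared "default" mLDP tunnel per VPN instance;
# further streams of the same instance reuse it, which is what makes the
# multicast VPN service scale.
import itertools
from dataclasses import dataclass, field
from typing import Dict, Optional

_tunnel_ids = itertools.count(1)

@dataclass(frozen=True)
class MulticastEvent:
    vpn_id: str   # identifier of the VPN instance (e.g., a route distinguisher)
    source: str   # source address of the requested multicast stream
    group: str    # group address of the requested multicast stream

@dataclass
class VpnState:
    default_tunnel: Optional[str] = None        # one default mLDP tunnel per VPN
    streams: set = field(default_factory=set)   # (source, group) trees on that tunnel

def handle_event(vpns: Dict[str, VpnState], event: MulticastEvent) -> str:
    """Return the mLDP tunnel that will carry the requested stream."""
    state = vpns.setdefault(event.vpn_id, VpnState())
    if state.default_tunnel is None:
        # First stream of this VPN instance: cause generation of the default
        # mLDP tunnel (in the patent, via an in-band LDP FEC message carrying
        # the VPN identifier and wildcard source/group addresses).
        state.default_tunnel = f"mldp-tunnel-{next(_tunnel_ids)}"
    # Any further stream of the same VPN instance reuses the default tunnel.
    state.streams.add((event.source, event.group))
    return state.default_tunnel
```

Two events for the same VPN instance resolve to the same tunnel; an event for a different VPN instance gets its own default tunnel.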
2. The method of claim 1, wherein the causing the generation includes transmitting an in-band signaling label distribution protocol (LDP) forwarding equivalent class (FEC) message.
3. The method of claim 2, wherein the LDP FEC message includes an opaque value field including the identifier of the first VPN instance, and a wildcard as a value of a group address and a source address.
4. The method of claim 3, wherein the identifier of the first VPN instance is a route distinguisher.
5. The method of claim 3, wherein the identifier of the first VPN instance is a VPN-ID.
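Claims 2 through 5 describe an in-band LDP FEC message whose opaque value carries the VPN identifier (a route distinguisher or VPN-ID) together with wildcard group and source addresses. The TLV layout below is a hypothetical sketch of such an opaque value, not the actual on-wire encoding defined for mLDP opaque values; the type code `250` is an assumed experimental value.

```python
# Hypothetical encoding of the opaque value of claims 3-5: an 8-byte route
# distinguisher followed by wildcard (all-zero) source and group addresses,
# wrapped in a 1-byte type / 2-byte length TLV. Layout is illustrative only.
import struct

WILDCARD = b"\x00\x00\x00\x00"   # 0.0.0.0 used as "any source" / "any group"
OPAQUE_TYPE_VPN_DEFAULT = 250    # assumed experimental opaque-value type code

def encode_opaque_value(route_distinguisher: bytes) -> bytes:
    if len(route_distinguisher) != 8:
        raise ValueError("a route distinguisher is 8 bytes")
    value = route_distinguisher + WILDCARD + WILDCARD  # RD + source + group
    return struct.pack("!BH", OPAQUE_TYPE_VPN_DEFAULT, len(value)) + value

def decode_opaque_value(blob: bytes):
    opaque_type, length = struct.unpack("!BH", blob[:3])
    value = blob[3:3 + length]
    rd, src, grp = value[:8], value[8:12], value[12:16]
    return opaque_type, rd, src, grp
```

Because the source and group are wildcards, a single such FEC element identifies the default tunnel for the whole VPN instance rather than one per (S, G) stream.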
6. The method of claim 1, wherein the causing the generation includes adding in a forwarding table a correspondence between a first IP multicast tree associated with the first multicast stream and the default mLDP tunnel.
7. The method of claim 6, wherein causing the packets of the second multicast stream to be forwarded through the default mLDP tunnel includes adding in the forwarding table a correspondence between a second IP multicast tree associated with the second multicast stream and the default mLDP tunnel.
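Claims 6 and 7 amount to a forwarding table keyed by the IP multicast tree (source, group) and valued by the mLDP tunnel that carries it, with the first and second trees both bound to the same default tunnel. A toy sketch, with all names hypothetical:

```python
# Toy forwarding table for claims 6-7: each IP multicast tree maps to the
# mLDP tunnel used to forward its packets; multiple trees of one VPN
# instance may share the default tunnel.
from typing import Dict, Tuple

class MulticastForwardingTable:
    def __init__(self) -> None:
        self._entries: Dict[Tuple[str, str], str] = {}  # (source, group) -> tunnel

    def bind(self, source: str, group: str, tunnel: str) -> None:
        """Add the correspondence between an IP multicast tree and a tunnel."""
        self._entries[(source, group)] = tunnel

    def lookup(self, source: str, group: str) -> str:
        """Resolve the tunnel used to forward packets of the (S, G) tree."""
        return self._entries[(source, group)]

    def forward(self, packet: dict) -> dict:
        """Encapsulate an IP multicast packet onto its mLDP tunnel."""
        tunnel = self._entries[(packet["src"], packet["grp"])]
        return {"tunnel": tunnel, "payload": packet}
```

Binding both the first and second trees to the same tunnel name reproduces the claim-7 behavior: the second stream is forwarded through the existing default tunnel with nothing more than an added table entry.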
8. A network device for enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network, the network device comprising:
one or more processors; and
non-transitory computer readable storage media storing instructions, which when executed by the one or more processors cause the network device to:
receive (502) a first internet protocol (IP) multicast event message from a first network device of a first VPN instance, wherein the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic;
cause (504) generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, wherein the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver;
receive (508) a second IP multicast event message from a second network device of the first VPN instance, wherein the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic;
determine (510) whether the second multicast stream and the first multicast stream include traffic for the first VPN instance;
responsive to determining that the second multicast stream and the first multicast stream include traffic within the first VPN instance, cause (512) packets of the second multicast stream to be forwarded through the default mLDP tunnel; and
receive (506), over the default mLDP tunnel, packets of the first and the second multicast stream to be forwarded towards the first and the second receiver respectively.
9. The network device of claim 8, wherein to cause the generation includes to transmit an in-band signaling label distribution protocol (LDP) forwarding equivalent class (FEC) message.
10. The network device of claim 9, wherein the LDP FEC message includes an opaque value field including the identifier of the first VPN instance, and a wildcard as a value of a group address and a source address.
11. The network device of claim 10, wherein the identifier of the first VPN instance is a route distinguisher.
12. The network device of claim 10, wherein the identifier of the first VPN instance is a VPN-ID.
13. The network device of claim 8, wherein to cause the generation includes to add in a forwarding table a correspondence between a first IP multicast tree associated with the first multicast stream and the default mLDP tunnel.
14. The network device of claim 13, wherein to cause the packets of the second multicast stream to be forwarded through the default mLDP tunnel includes to add in the forwarding table a correspondence between a second IP multicast tree associated with the second multicast stream and the default mLDP tunnel.
15. A non-transitory computer readable storage medium storing instructions, which when executed by a processor of a network device cause the network device to perform operations comprising:
receiving (502) a first Internet Protocol (IP) multicast event message from a first network device of a first VPN instance, wherein the first IP multicast event message includes an identifier of the first VPN instance and an identification of a first multicast stream for which a first receiver of the first VPN instance requests to receive traffic;
causing (504) generation of a default multicast label distribution protocol (mLDP) tunnel for forwarding traffic of the first multicast stream from a first source to the first receiver of the first multicast stream through an MPLS network, wherein the MPLS network couples a first site of the first VPN instance including the first source and a second site of the first VPN instance including the first receiver;
receiving (508) a second IP multicast event message from a second network device of the first VPN instance, wherein the second IP multicast event message includes the identifier of the first VPN instance and an identification of a second multicast stream for which a second receiver of the first VPN instance requests to receive traffic;
determining (510) whether the second multicast stream and the first multicast stream include traffic for the first VPN instance;
responsive to determining that the second multicast stream and the first multicast stream include traffic within the first VPN instance, causing (512) packets of the second multicast stream to be forwarded through the default mLDP tunnel; and
receiving (506), over the default mLDP tunnel, packets of the first and the second multicast stream to be forwarded towards the first and the second receiver respectively.
16. The non-transitory computer readable storage medium of claim 15, wherein the causing the generation includes transmitting an in-band signaling label distribution protocol (LDP) forwarding equivalent class (FEC) message.
17. The non-transitory computer readable storage medium of claim 16, wherein the LDP FEC message includes an opaque value field including the identifier of the first VPN instance, and a wildcard as a value of a group address and a source address.
18. The non-transitory computer readable storage medium of claim 17, wherein the identifier of the first VPN instance is a route distinguisher.
19. The non-transitory computer readable storage medium of claim 17, wherein the identifier of the first VPN instance is a VPN-ID.
20. The non-transitory computer readable storage medium of claim 15, wherein the causing the generation includes adding in a forwarding table a correspondence between a first IP multicast tree associated with the first multicast stream and the default mLDP tunnel.
21. The non-transitory computer readable storage medium of claim 20, wherein the causing the packets of the second multicast stream to be forwarded through the default mLDP tunnel includes adding in the forwarding table a correspondence between a second IP multicast tree associated with the second multicast stream and the default mLDP tunnel.
22. A method of enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network, the method comprising:
monitoring (802) a plurality of multicast streams transmitted over a default multicast label distribution protocol (mLDP) tunnel, wherein the default mLDP tunnel is used to forward the plurality of multicast streams from sources to receivers of a VPN instance through an MPLS network;
responsive (804) to determining that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, performing the following:
causing (808) generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream;
forwarding (810) packets of the first multicast stream through the dedicated mLDP tunnel; and
forwarding (812) packets of a subset of the plurality of multicast streams through the default mLDP tunnel, wherein the subset of the plurality of multicast streams does not include the first multicast stream.
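The monitoring method of claim 22 can be sketched as a periodic policy check: a stream whose measured rate violates the forwarding policy is moved to a dedicated mLDP tunnel, while the remaining streams keep sharing the default tunnel. The threshold, tunnel names, and rate figures below are illustrative assumptions, not values from the disclosure.

```python
# Sketch of claim 22: offload a policy-violating stream from the shared
# default mLDP tunnel onto its own dedicated tunnel.
from typing import Dict, Tuple

THRESHOLD_BPS = 1_000_000  # assumed policy: streams below 1 Mb/s stay on the default tunnel

def rebalance(stream_rates: Dict[Tuple[str, str], int],
              default_tunnel: str = "mldp-default") -> Dict[Tuple[str, str], str]:
    """Return a mapping of (source, group) stream -> tunnel after applying the policy."""
    placement = {}
    for stream, rate_bps in stream_rates.items():
        if rate_bps > THRESHOLD_BPS:
            # Policy not satisfied: cause generation of a dedicated mLDP tunnel
            # for this stream (in the patent, signaled over the default tunnel).
            src, grp = stream
            placement[stream] = f"mldp-dedicated-{src}-{grp}"
        else:
            # The subset of streams that satisfy the policy keeps the default tunnel.
            placement[stream] = default_tunnel
    return placement
```

This mirrors claim 23's example policy (a transmission rate below a predetermined threshold): only the high-rate stream is peeled off, so the default tunnel's fan-out state stays bounded.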
23. The method of claim 22, wherein satisfying the forwarding policy includes having a transmission rate below a predetermined transmission threshold.
24. The method of claim 22, wherein causing the generation of a dedicated mLDP tunnel includes transmitting over the default mLDP tunnel a message including a request to generate the dedicated mLDP tunnel and an identification of the multicast stream for which the dedicated mLDP tunnel is to be generated.
25. The method of claim 24, wherein the message causes initiation of a protocol independent multicast (PIM) mechanism for generating the dedicated mLDP tunnel for forwarding traffic of the first multicast stream.
26. A network device for enabling a multicast virtual private network (VPN) service across a multiprotocol label switching (MPLS) network, the network device comprising:
one or more processors; and
a non-transitory computer readable storage medium that stores instructions, which when executed by the one or more processors cause the network device to:
monitor (802) a plurality of multicast streams transmitted from sources to receivers of a VPN instance over a default multicast label distribution protocol (mLDP) tunnel of an MPLS network,
responsive to determining (804) that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, perform the following:
cause (808) generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream,
forward (810) packets of the first multicast stream through the dedicated mLDP tunnel; and
forward (812) packets of a subset of the plurality of multicast streams through the default mLDP tunnel, wherein the subset of the plurality of multicast streams does not include the first multicast stream.
27. The network device of claim 26, wherein to satisfy a forwarding policy includes to have a transmission rate below a predetermined transmission threshold.
28. The network device of claim 26, wherein to cause the generation of a dedicated mLDP tunnel includes to transmit over the default mLDP tunnel a message including a request to generate the dedicated mLDP tunnel and an identification of the multicast stream for which the dedicated mLDP tunnel is to be generated.
29. The network device of claim 28, wherein the message causes initiation of a protocol independent multicast (PIM) mechanism for generating the dedicated mLDP tunnel for forwarding traffic of the first multicast stream.
30. A non-transitory computer readable storage medium storing instructions, which when executed by a processor of a network device cause the network device to perform operations comprising:
monitoring (802) a plurality of multicast streams transmitted over a default multicast label distribution protocol (mLDP) tunnel, wherein the default mLDP tunnel is used to forward the plurality of multicast streams from sources to receivers of a VPN instance through an MPLS network;
responsive (804) to determining that a first multicast stream from the plurality of multicast streams forwarded over the default mLDP tunnel does not satisfy a forwarding policy, performing the following:
causing (808) generation of a dedicated mLDP tunnel for forwarding packets of the first multicast stream;
forwarding (810) packets of the first multicast stream through the dedicated mLDP tunnel; and
forwarding (812) packets of a subset of the plurality of multicast streams through the default mLDP tunnel, wherein the subset of the plurality of multicast streams does not include the first multicast stream.
31. The non-transitory computer readable storage medium of claim 30, wherein satisfying the forwarding policy includes having a transmission rate below a predetermined transmission threshold.
32. The non-transitory computer readable storage medium of claim 30, wherein causing the generation of a dedicated mLDP tunnel includes transmitting over the default mLDP tunnel a message including a request to generate the dedicated mLDP tunnel and an identification of the multicast stream for which the dedicated mLDP tunnel is to be generated.
33. The non-transitory computer readable storage medium of claim 32, wherein the message causes initiation of a protocol independent multicast (PIM) mechanism for generating the dedicated mLDP tunnel for forwarding traffic of the first multicast stream.
PCT/IB2017/052188 2017-04-17 2017-04-17 Method and apparatus for enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling WO2018193285A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2017/052188 WO2018193285A1 (en) 2017-04-17 2017-04-17 Method and apparatus for enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2017/052188 WO2018193285A1 (en) 2017-04-17 2017-04-17 Method and apparatus for enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling

Publications (1)

Publication Number Publication Date
WO2018193285A1 true WO2018193285A1 (en) 2018-10-25

Family

ID=58638907

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2017/052188 WO2018193285A1 (en) 2017-04-17 2017-04-17 Method and apparatus for enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling

Country Status (1)

Country Link
WO (1) WO2018193285A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112583710A (en) * 2019-09-30 2021-03-30 瞻博网络公司 Assisted replication in software defined networks
EP3883182A1 (en) * 2020-03-20 2021-09-22 Juniper Networks, Inc. Evpn multicast ingress forwarder election using source-active route
WO2022100554A1 (en) * 2020-07-21 2022-05-19 华为技术有限公司 Method for forwarding bier message, and device and system
WO2023076234A1 (en) * 2021-10-25 2023-05-04 Cisco Technology, Inc. Constraint-based underlay tree allocation for data centers
CN116471648A (en) * 2022-01-12 2023-07-21 慧与发展有限责任合伙企业 Multicast WAN optimization in large-scale branch deployments using a central cloud-based service
WO2025081724A1 (en) * 2023-10-16 2025-04-24 中兴通讯股份有限公司 Multicast load splitting method for label network, network device, and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6587467B1 (en) * 1999-11-03 2003-07-01 3Com Corporation Virtual channel multicast utilizing virtual path tunneling in asynchronous mode transfer networks
US20060056427A1 (en) * 2004-08-31 2006-03-16 Matsushita Electric Industrial Co., Ltd. Multicast communication method and gateway apparatus
US7830787B1 (en) * 2001-09-25 2010-11-09 Cisco Technology, Inc. Flooding control for multicast distribution tunnel
US20110255536A1 (en) * 2008-12-31 2011-10-20 Huawei Technologies Co., Ltd. Method, system, and apparatus for extranet networking of multicast virtual private network
US20120057594A1 (en) * 2006-05-25 2012-03-08 Cisco Technology, Inc. Techniques for Reliable Switchover to a Date Multicast Distribution Tree (MDT)
US20130010649A1 (en) * 2004-12-21 2013-01-10 At&T Corp. Method and apparatus for scalable virtual private network multicasting
CN106230730A (en) * 2016-07-28 2016-12-14 杭州华三通信技术有限公司 A kind of multicast transmission method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
H3C TECHNOLOGIES CO ET AL: "Multicast VPN Technology White Paper", Internet citation, 22 July 2008 (2008-07-22), pages 1-26, XP002688313, Retrieved from the Internet <URL:http://www.h3c.com/portal/download.do?id=648599> [retrieved on 2012-11-30] *
IJ WIJNANDS ET AL: "Label Distribution Protocol Extensions for Point-to-Multipoint and Multipoint-to-Multipoint Label Switched Paths", IETF RFC 6388, 1 November 2011 (2011-11-01), XP055364179, Retrieved from the Internet <URL:https://www.ietf.org/> [retrieved on 2017-06-16] *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112583710A (en) * 2019-09-30 2021-03-30 瞻博网络公司 Assisted replication in software defined networks
CN112583710B (en) * 2019-09-30 2023-04-07 瞻博网络公司 Assisted replication in software defined networks
US11665088B2 (en) 2019-09-30 2023-05-30 Juniper Networks, Inc. Assisted replication in software defined network
EP3883182A1 (en) * 2020-03-20 2021-09-22 Juniper Networks, Inc. Evpn multicast ingress forwarder election using source-active route
CN113497766A (en) * 2020-03-20 2021-10-12 瞻博网络公司 EVPN multicast ingress forwarder selection using source activated routing
US11496329B2 (en) 2020-03-20 2022-11-08 Juniper Networks, Inc. EVPN multicast ingress forwarder election using source-active route
WO2022100554A1 (en) * 2020-07-21 2022-05-19 华为技术有限公司 Method for forwarding bier message, and device and system
WO2023076234A1 (en) * 2021-10-25 2023-05-04 Cisco Technology, Inc. Constraint-based underlay tree allocation for data centers
CN116471648A (en) * 2022-01-12 2023-07-21 慧与发展有限责任合伙企业 Multicast WAN optimization in large-scale branch deployments using a central cloud-based service
WO2025081724A1 (en) * 2023-10-16 2025-04-24 中兴通讯股份有限公司 Multicast load splitting method for label network, network device, and readable storage medium

Similar Documents

Publication Publication Date Title
US11438254B2 (en) Apparatus and method to trace packets in a packet processing pipeline of a software defined networking switch
US11444864B2 (en) Optimized datapath troubleshooting with trace policy engine
US11115328B2 (en) Efficient troubleshooting in openflow switches
US9923781B2 (en) Designated forwarder (DF) election and re-election on provider edge (PE) failure in all-active redundancy topology
US10523456B2 (en) Multipoint to multipoint trees for computed spring multicast
US11968082B2 (en) Robust node failure detection mechanism for SDN controller cluster
US11463399B2 (en) Efficient network address translation (NAT) in cloud networks
US9521458B2 (en) IPTV targeted messages
CN108055878A (en) Using Border Gateway Protocol maximum segment identifier depth is disclosed to applications
EP3479553B1 (en) Efficient nat in sdn network
WO2017009755A1 (en) Mtu discovery over multicast path using bit indexed explicit replication
WO2018220638A1 (en) Optimizing service node monitoring in sdn
US10291957B2 (en) Quicker IPTV channel with static group on IGMP loopback interface
WO2018193285A1 (en) Method and apparatus for enabling a scalable multicast virtual private network service across a multicast label distribution protocol network using in-band signaling
US12021656B2 (en) Method and system to transmit broadcast, unknown unicast, or multicast (BUM) traffic for multiple ethernet virtual private network (EVPN) instances (EVIs)
WO2018042230A1 (en) Configurable selective packet-in mechanism for openflow switches
US12113705B2 (en) Controller watch port for robust software defined networking (SDN) system operation
US10944582B2 (en) Method and apparatus for enhancing multicast group membership protocol(s)
US11218406B2 (en) Optimized datapath troubleshooting
WO2018158615A1 (en) Method and apparatus for enabling the creation of a point-to-multipoint label switched path multicast distribution tree for a given ip multicast stream

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17719925

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17719925

Country of ref document: EP

Kind code of ref document: A1
