
WO2018106604A1 - Systems, methods and devices for reporting virtual network function virtual processor usage in cellular networks


Info

Publication number
WO2018106604A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual processor
processor usage
virtual
measurement
usage
Prior art date
Application number
PCT/US2017/064538
Other languages
English (en)
Inventor
Yizhi Yao
Joey Chou
Original Assignee
Intel IP Corporation
Priority date
Filing date
Publication date
Application filed by Intel IP Corporation filed Critical Intel IP Corporation
Publication of WO2018106604A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3452 Performance evaluation by statistical analysis

Definitions

  • the present disclosure relates to cellular networks and more specifically to virtual processor usage reporting for virtual network functions.
  • Wireless mobile communication technology uses various standards and protocols to transmit data between a base station and a wireless mobile device.
  • Wireless communication system standards and protocols can include the 3rd Generation Partnership Project (3GPP) long term evolution (LTE); the Institute of Electrical and Electronics Engineers (IEEE) 802.16 standard, which is commonly known to industry groups as worldwide interoperability for microwave access (WiMAX); and the IEEE 802.11 standard for wireless local area networks (WLAN), which is commonly known to industry groups as Wi-Fi.
  • the base station can include a RAN Node such as an Evolved Universal Terrestrial Radio Access Network (E-UTRAN) Node B (also commonly denoted as evolved Node B, enhanced Node B, eNodeB, or eNB) and/or Radio Network Controller (RNC) in an E-UTRAN, which communicate with a wireless communication device, known as user equipment (UE).
  • Nodes can include a 5G Node, new radio (NR) node, or gNode B (gNB).
  • RANs use a radio access technology (RAT) to communicate between the RAN Node and UE.
  • RANs can include global system for mobile communications (GSM), enhanced data rates for GSM evolution (EDGE) RAN (GERAN), Universal Terrestrial Radio Access Network (UTRAN), and/or E-UTRAN, which provide access to communication services through a core network.
  • Each of the RANs operates according to a specific 3GPP RAT.
  • the GERAN implements GSM and/or EDGE RAT
  • the UTRAN implements universal mobile telecommunication system (UMTS) RAT or other 3GPP RAT
  • the E-UTRAN implements LTE RAT.
  • a core network can be connected to the UE through the RAN Node.
  • the core network can include a serving gateway (SGW), a packet data network (PDN) gateway (PGW), an access network detection and selection function (ANDSF) server, an enhanced packet data gateway (ePDG) and/or a mobility management entity (MME).
  • FIG. 1 is a diagram illustrating a network management architecture for virtualized network functions (VNFs or network function virtualization (NFV) more generally) consistent with embodiments disclosed herein.
  • FIG. 2 is a sequence diagram illustrating a reporting sequence for virtual processor usage consistent with embodiments disclosed herein.
  • FIG. 3 is a bar chart illustrating distribution of a virtual processor usage consistent with embodiments disclosed herein.
  • FIG. 4 is a flow chart illustrating a method for managing virtualized network functions consistent with embodiments disclosed herein.
  • FIG. 5 illustrates an architecture of a system of a network in accordance with some embodiments.
  • FIG. 6 illustrates example components of a device in accordance with some embodiments.
  • FIG. 7 illustrates example interfaces of baseband circuitry in accordance with some embodiments.
  • FIG. 8 is an illustration of a control plane protocol stack in accordance with some embodiments.
  • FIG. 9 is an illustration of a user plane protocol stack in accordance with some embodiments.
  • FIG. 10 illustrates components of a core network in accordance with some embodiments.
  • FIG. 11 is a block diagram illustrating components able to read instructions from a machine-readable or computer-readable medium and perform any one or more of the methodologies discussed herein.
  • An overloaded Virtualized Compute Resource affects the performance of a VNF/VNFC instance, while an underused Virtualized Compute Resource of a VNF/VNFC instance wastes a resource that cannot be utilized by other VNF/VNFC instances.
  • As the usage of Virtualized Compute Resources for a given VNF/VNFC instance can fluctuate dynamically, the VNFM can monitor Virtualized Compute Resource usage. If the Virtualized Compute Resource usage has been saturated for a certain period, the VNFM can scale out or scale up the VNF/VNFC instance to allocate more Virtualized Compute Resources. If the Virtualized Compute Resource usage has been underused for a certain period, the VNFM can scale in or scale down the VNF/VNFC instance to release some Virtualized Compute Resources.
  • The usage of an individual virtual processor (also known as a virtual CPU), which is a part of the Virtualized Compute Resource, or the consolidated usage of all virtual processors of the Virtualized Compute Resource, needs to be monitored by the VNFM for the above purpose.
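The threshold-based monitoring and scaling behavior described above can be sketched as follows. The thresholds, the use of a mean over the period, and the function name are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of a VNFM scaling decision based on vCPU usage
# collected over a period. Usage values are percentages (0-100).

def scaling_decision(usage_samples, saturation=90, underuse=20):
    """Return a scaling action for a VNF/VNFC instance."""
    mean_usage = sum(usage_samples) / len(usage_samples)
    if mean_usage >= saturation:
        return "scale_out"   # allocate more Virtualized Compute Resources
    if mean_usage <= underuse:
        return "scale_in"    # release some Virtualized Compute Resources
    return "no_action"
```

In practice the VNFM would base this on the mean, peak, or distribution measurements reported by the VIM, as described below.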
  • Virtualized can be spelled as virtualised; the meaning is the same.
  • FIG. 1 is a diagram illustrating a network management architecture 100 for virtualized network functions (VNFs or network function virtualization (NFV) more generally). These components, according to some example embodiments, can support NFV.
  • the system 100 is illustrated as including a virtualized infrastructure manager (VIM) 102, a network function virtualization infrastructure (NFVI) 104, a VNF manager (VNFM) 106, virtualized network functions (VNFs) 108, an element manager (EM) 110, an NFV Orchestrator (NFVO) 112, and a network manager (NM) 114 within an Operation Support System/Business Support System (OSS/BSS) 122.
  • the VIM 102 manages the resources of the NFVI 104.
  • the NFVI 104 can include physical or virtual resources and applications (including hypervisors) used to execute the system 100.
  • the VIM 102 may manage the life cycle of virtual resources with the NFVI 104 (e.g., creation, maintenance, and tear down of virtual machines (VMs) associated with one or more physical resources), track VM instances, track performance, fault and security of VM instances and associated physical resources, and expose VM instances and associated physical resources to other management systems.
  • the VNFM 106 may manage the VNFs 108.
  • the VNFs 108 may be used to execute IMS, EPC and 5G (5GC and NG-RAN) components/functions.
  • the VNFM 106 may manage the life cycle of the VNFs 108 and track performance, fault and security of the virtual aspects of VNFs 108.
  • the EM 110 may track the performance, fault and security of the functional aspects of VNFs 108 and physical network functions (PNFs) 124.
  • the tracking data from the VNFM 106 and the EM 110 may comprise, for example, performance measurement (PM) data used by the VIM 102 or the NFVI 104.
  • Both the VNFM 106 and the EM 110 can scale up/down the quantity of VNFs 108 of the system 100.
  • the EM 110 is responsible for fault, configuration, accounting, performance and security management (FCAPS).
  • The EM 110 can manage multiple VNFs 108, or multiple EMs 110 can each manage a single VNF 108.
  • the EM 110 can be a VNF 108 itself.
  • the combination of the NM 114, a domain manager (DM) 126 and/or the EM 110 is considered to be a third generation partnership project (3GPP) management system.
  • the NFVO 112 may coordinate, authorize, release and engage resources of the NFVI 104 in order to provide the requested network service (e.g., which may be used to execute an EPC function, component, or slice).
  • the NM 114 may provide a package of end-user functions with the responsibility for the management of a network, which may include network elements with VNFs 108, non-virtualized network functions, or both (management of the VNFs 108 may occur via the EM 110).
  • the OSS portion of the OSS/BSS 122 is responsible for network management, fault management, configuration management and service management.
  • the BSS portion of the OSS/BSS 122 is responsible for customer management, product management and order management.
  • the current BSS/OSS 122 of an operator may be interworking with an NFV Management and Orchestration (NFV-MANO) 132 using standard interfaces (or reference points).
  • Interconnection points (or reference points) between functional blocks can expose an external view of a functional block. These can include Os-Ma-nfvo between the NM 114 and the NFVO 112; Ve-Vnfm-em between the EM 110 and the VNFM 106; Ve-Vnfm-vnf between a VNF 108 and the VNFM 106; Or-Vnfm between the NFVO 112 and the VNFM 106; Or-Vi between the NFVO 112 and the VIM 102; Vi-Vnfm between the VNFM 106 and the VIM 102; Nf-Vi between the NFVI 104 and the VIM 102; Vn-Nf between the NFVI 104 and a VNF 108; and Itf-N between the EM 110 or DM 126 and the NM 114.
  • A Virtualized Resource Performance Management Interface has been defined for reference point Vi-Vnfm between the VIM 102 and the VNFM 106 as shown in FIG. 1.
  • The operations to create a performance measurement (PM, sometimes called performance metrics) job and to notify the availability of PM data can be transmitted using the above-mentioned interface.
  • The usage of an individual virtual CPU (sometimes called a virtual processor or vCPU), which is a part of a Virtualized Resource (VR), or the consolidated usage of all virtual CPUs of a Virtualized Compute Resource, can be monitored by a performance measurement.
  • FIG. 2 is a sequence diagram 200 illustrating a reporting sequence for virtual processor usage.
  • An NFVI 206 samples a virtual processor (or virtual CPU) or all virtual processors of a Virtualized Compute Resource and provides metrics to a VIM 204.
  • the VIM 204 processes the metrics and prepares a report of virtual processor usage to send to a VNFM 202.
  • A virtual CPU usage measurement for reference point Vi-Vnfm is collected by the following approach.
  • The VIM 204 receives 212 one or more vCPU usage metrics from the NFVI 206, which samples the vCPU usage at a pre-defined interval.
  • the VIM 204 processes 208 the received vCPU usage metrics within the collection period. An example of the details of the processing is depicted below in the specific vCPU usage measurement definitions.
  • the VIM 204 generates 210 the vCPU usage measurement for reference point Vi-Vnfm.
  • the VIM 204 reports 214 to the VNFM 202 about the available vCPU usage measurement.
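The FIG. 2 sequence can be sketched in code. The class and method names below are illustrative assumptions, not an API defined by the disclosure; usage values are percentages:

```python
# Illustrative sketch of the reporting sequence: the NFVI samples vCPU usage,
# the VIM processes the samples into a measurement, and reports it to the VNFM.

class Vim:
    def __init__(self):
        self.samples = []

    def receive_metric(self, vcpu_usage):   # step 212: metric received from the NFVI
        self.samples.append(vcpu_usage)

    def generate_measurement(self):         # steps 208/210: process samples, generate measurement
        return sum(self.samples) / len(self.samples)

    def report(self, vnfm):                 # step 214: report availability to the VNFM
        vnfm.on_measurement(self.generate_measurement())

class Vnfm:
    def __init__(self):
        self.measurements = []

    def on_measurement(self, value):
        self.measurements.append(value)

vim, vnfm = Vim(), Vnfm()
for sample in (40, 60, 50):                 # NFVI samples at a pre-defined interval
    vim.receive_metric(sample)
vim.report(vnfm)
```

Here the processing step is an arithmetic mean; the specific processing depends on the measurement definitions below.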
  • The performance measurement template can be defined by a collection method, trigger, measurement unit, measurement name, measurement group, measured object type and/or usage.
  • A collection method can include a cumulative counter (CC): a Measuring Entity (ME) takes the measurement, resets the counter to "0" at the beginning of each collection period, and increments the counter for each event being detected.
  • a measurement is generated from the counter at the end of the collection period.
  • a collection method can also include status monitoring (SM). The ME samples the counter at a predetermined interval.
  • a measurement is generated from processing (e.g., arithmetic mean, peak) all of the samples obtained in the collection period.
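The two collection methods just defined can be sketched with simple in-memory counters; the names below are illustrative assumptions:

```python
# Sketch of the cumulative counter (CC) and status monitoring (SM) methods.

class CumulativeCounter:
    def __init__(self):
        self.count = 0                # reset to 0 at the start of each collection period

    def on_event(self):
        self.count += 1               # incremented for each detected event

    def end_of_period(self):
        measurement, self.count = self.count, 0
        return measurement            # measurement generated at end of collection period

def status_monitoring(samples, method="mean"):
    # SM: samples taken at a predetermined interval, then processed
    # (e.g., arithmetic mean or peak) over the collection period.
    return max(samples) if method == "peak" else sum(samples) / len(samples)
```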
  • a trigger can cause the counter to be updated.
  • a measurement unit can define the unit of the measurement value.
  • a measurement name can define the name of a performance metric.
  • a measurement can contain multiple subcounters (e.g., counts associated with failure causes).
  • a measurement group can define a performance metric group to which a measurement belongs.
  • a measured object type can describe the object type of a measurement. Usage can be optional and describes the usage of the measurement. It may provide additional information on the interest (e.g., used in the VNFD) of the measurement.
  • Performance measurements can measure the performance of virtualized resources.
  • the specific virtualized resource where the measurement is to be collected is identified by object type and object instance identifier that may be provided by the performance measurement job creation.
  • Each performance measurement report should contain a time stamp to indicate when the measurement is collected.
  • The usage of an individual virtual CPU, which is a part of the Virtualized Compute Resource, or the consolidated usage of all virtual processors of the Virtualized Compute Resource, needs to be monitored by the VNFM.
  • the monitoring can include mean virtual CPU usage, peak virtual CPU usage, distribution of virtual CPU usage, etc.
  • Mean virtual CPU usage is a measurement that provides the mean usage of an individual virtual CPU or consolidated usage of all virtual CPUs of a Virtualized Compute Resource. This can be accomplished through status monitoring (SM) as described above.
  • The VIM receives the virtual CPU usage metric from the NFVI, which samples the virtual CPU usage (of an individual virtual CPU, or consolidated for all virtual CPUs of a virtualized compute resource) at the pre-defined interval, and then takes the arithmetic mean of the virtual CPU usage metrics received in the collection period.
  • the VIM may receive the mean virtual CPU usage metric directly from NFVI, if NFVI can take the arithmetic mean of the virtual CPU usage metrics that it collects in the collection period.
  • The following options exist for creating the measurement when multiple virtual CPU usage metrics are received: using subcounters to report mean usage measurements of multiple virtual CPUs; reporting a measurement with the highest mean usage among multiple virtual CPUs; or performing processing of the usage measurements of multiple virtual CPUs and then generating a consolidated measurement.
  • Each measurement is a real value indicating the percentage of the virtual CPU resources that are used.
  • The usage can be labeled as VcpuUsageMean or VcpuUsageMean.VcpuId, where VcpuId identifies a virtual CPU.
  • The VcpuId can be used in cases where there are multiple virtual CPUs in the measured Virtualized Compute Resource and each virtual CPU is measured by a subcounter.
  • A virtual resource can be identified using a Virtualized Compute Resource identifier. The identifier can identify the Virtualized Compute Resource (whether it is identified by ComputeId or instanceId). The mean virtual CPU usage can also be requested in the VNFD.
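The three options for handling multiple virtual CPUs in the mean measurement could be sketched as follows; the function names and the per-vCPU sample dictionary are assumptions for illustration, and usage values are percentages:

```python
# Sketch of the three reporting options for the mean vCPU usage measurement.

def mean_usage_per_vcpu(samples_by_vcpu):
    """Option 1: subcounters, one mean per virtual CPU (VcpuUsageMean.VcpuId)."""
    return {vcpu: sum(s) / len(s) for vcpu, s in samples_by_vcpu.items()}

def highest_mean_usage(samples_by_vcpu):
    """Option 2: report only the highest mean usage among the virtual CPUs."""
    return max(mean_usage_per_vcpu(samples_by_vcpu).values())

def consolidated_mean_usage(samples_by_vcpu):
    """Option 3: consolidate all vCPU samples into a single measurement."""
    all_samples = [s for samples in samples_by_vcpu.values() for s in samples]
    return sum(all_samples) / len(all_samples)
```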
  • Peak virtual CPU usage is a measurement that provides the peak usage of an individual virtual CPU or consolidated usage of all virtual CPUs of a Virtualized Compute Resource. This can be accomplished through status monitoring (SM) as described above.
  • The VIM receives the virtual CPU usage metric from the NFVI, which samples the virtual CPU usage at the pre-defined interval, and then selects the maximum metric among the virtual CPU usage metrics received in the collection period.
  • The VIM may receive the peak virtual CPU usage metric directly from the NFVI, if the NFVI can select the maximum metric among the virtual CPU usage metrics that it collects in the collection period.
  • The following options exist for creating the measurement when multiple virtual CPU usage metrics are received for a Virtualized Compute Resource: using subcounters to report maximum usage measurements of multiple virtual CPUs; reporting a measurement with the highest maximum usage among multiple virtual CPUs; or performing processing of the usage measurements of multiple virtual CPUs and then generating a consolidated measurement.
  • Each measurement is a real value indicating the percentage of the virtual CPU resources that are used.
  • The usage can be labeled as VcpuUsagePeak or VcpuUsagePeak.VcpuId, where VcpuId identifies a virtual CPU of a Virtualized Compute Resource, in case there are multiple virtual CPUs in the measured Virtualized Compute Resource and each is measured by a subcounter.
  • A virtual resource can be identified using a Virtualized Compute Resource identifier. The identifier can identify the Virtualized Compute Resource (whether it is identified by ComputeId or instanceId).
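The peak measurement options reduce to selecting maxima over the collection period; this compact sketch is illustrative only, with the per-vCPU sample dictionary and function name as assumptions:

```python
# Sketch of the peak vCPU usage measurement: the maximum usage metric
# received in the collection period, per vCPU (subcounters) or consolidated.

def peak_usage(samples_by_vcpu, consolidated=True):
    peaks = {vcpu: max(s) for vcpu, s in samples_by_vcpu.items()}
    return max(peaks.values()) if consolidated else peaks
```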
  • FIG. 3 is a bar chart 300 illustrating distribution of a virtual CPU usage (of an individual virtual CPU or consolidated for all of the CPUs of a virtualized compute resource). This measurement provides the distribution of the virtual CPU usage for a Virtualized Compute Resource. The distribution is indicated by the ratio of the usage samples that fell into each given usage range (e.g., 60% - 70%, or 70% - 80%) to the total usage samples of the virtual CPU.
  • An example of the distribution of virtual CPU usage is illustrated in FIG. 3.
  • The VIM receives the virtual CPU usage metric from the NFVI, which samples the virtual CPU usage at a pre-defined interval, and performs processing. For example, if the received virtual CPU usage metric already indicates the distribution of the virtual CPU usage but multiple metrics (reports) for this virtual CPU are received from the NFVI in the collection period, the VIM takes the arithmetic mean of all of the CPU usage metrics received in the collection period. If the received virtual CPU usage metrics are not the distribution but samples of the virtual CPU usage, the VIM generates the distribution of virtual CPU usage for each given usage level based on the received samples.
  • The following options exist for generating the measurement: using subcounters to report mean usage measurements of multiple virtual CPUs; reporting a measurement for a selected virtual CPU (how to select the virtual CPU is outside the scope of the standards); or performing processing of the usage measurements of multiple virtual CPUs and then generating a consolidated measurement.
  • Each measurement can be an integer indicating the percentage of virtual CPU usage samples that fell into the corresponding usage level.
  • The usage can be labeled as VcpuUsage.Range or VcpuUsage.VcpuId.Range, where VcpuId identifies a virtual CPU, in case there are multiple virtual CPUs in the measured Virtualized Compute Resource and each virtual CPU is measured by a subcounter.
  • Range can indicate the configurable range of the virtual CPU usage, e.g., 70% ≤ the virtual CPU usage < 80%.
  • A virtual resource can be identified using a Virtualized Compute Resource identifier. The identifier can identify the Virtualized Compute Resource (whether it is identified by ComputeId or by instanceId).
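The FIG. 3 distribution measurement (the percentage of usage samples falling into each configurable range) can be sketched as follows. The fixed 10% bin width and the function name are assumptions; a deployment would use the configurable ranges described above:

```python
# Sketch of the distribution measurement: each returned value is the integer
# percentage of vCPU usage samples (0-100) that fell into the given range.

def usage_distribution(samples, bin_width=10):
    for s in samples:
        assert 0 <= s <= 100, "usage samples are percentages"
    counts = {}
    for s in samples:
        # place each sample into its [lo, lo + bin_width) range;
        # a sample of exactly 100 goes into the top range
        lo = min(int(s // bin_width) * bin_width, 100 - bin_width)
        counts[(lo, lo + bin_width)] = counts.get((lo, lo + bin_width), 0) + 1
    return {r: round(100 * c / len(samples)) for r, c in counts.items()}
```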
  • FIG. 4 is a flow chart illustrating a method 400 for managing virtualized network functions consistent with embodiments disclosed herein.
  • The method 400 can be performed as follows. The VIM can receive a set of virtual processor usage metrics from the NFVI, the set of virtual processor usage metrics covering at least a collection period.
  • the VIM can generate a virtual processor usage measurement for a virtualized compute resource based at least in part on the virtual CPU usage metrics.
  • the VIM can report the virtual processor usage measurement to the VNFM.
  • FIG. 5 illustrates an architecture of a system 500 of a network in accordance with some embodiments.
  • the system 500 is shown to include a user equipment (UE) 501 and a UE 502.
  • the UEs 501 and 502 are illustrated as smartphones (e.g., handheld touchscreen mobile computing devices connectable to one or more cellular networks), but may also comprise any mobile or non-mobile computing device, such as Personal Data Assistants (PDAs), pagers, laptop computers, desktop computers, wireless handsets, or any computing device including a wireless communications interface.
  • any of the UEs 501 and 502 can comprise an Internet of Things (IoT) UE, which can comprise a network access layer designed for low-power IoT applications utilizing short-lived UE connections.
  • An IoT UE can utilize technologies such as machine-to-machine (M2M) or machine-type communications (MTC) for exchanging data with an MTC server or device via a public land mobile network (PLMN), Proximity-Based Service (ProSe) or device-to-device (D2D) communication, sensor networks, or IoT networks.
  • M2M or MTC exchange of data may be a machine-initiated exchange of data.
  • An IoT network describes interconnecting IoT UEs, which may include uniquely identifiable embedded computing devices (within the Internet infrastructure), with short-lived connections.
  • the IoT UEs may execute background applications (e.g., keep-alive messages, status updates, etc.) to facilitate the connections of the IoT network.
  • the UEs 501 and 502 may be configured to connect, e.g., communicatively couple, with a radio access network (RAN) 510.
  • the RAN 510 may be, for example, an Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN), a NextGen RAN (NG RAN), or some other type of RAN.
  • the UEs 501 and 502 utilize connections 503 and 504, respectively, each of which comprises a physical communications interface or layer (discussed in further detail below); in this example, the connections 503 and 504 are illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols, such as a Global System for Mobile Communications (GSM) protocol, a code-division multiple access (CDMA) network protocol, a Push-to-Talk (PTT) protocol, a PTT over Cellular (POC) protocol, a Universal Mobile Telecommunications System (UMTS) protocol, a 3GPP Long Term Evolution (LTE) protocol, a fifth generation (5G) protocol, a New Radio (NR) protocol, and the like.
  • the UEs 501 and 502 may further directly exchange communication data via a ProSe interface 505.
  • the ProSe interface 505 may alternatively be referred to as a sidelink interface comprising one or more logical channels, including but not limited to a Physical Sidelink Control Channel (PSCCH), a Physical Sidelink Shared Channel (PSSCH), a Physical Sidelink Discovery Channel (PSDCH), and a Physical Sidelink Broadcast Channel (PSBCH).
  • the UE 502 is shown to be configured to access an access point (AP) 506 via connection 507.
  • the connection 507 can comprise a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, wherein the AP 506 would comprise a wireless fidelity (WiFi®) router.
  • the AP 506 may be connected to the Internet without connecting to the core network of the wireless system (described in further detail below).
  • the RAN 510 can include one or more access nodes that enable the connections 503 and 504. These access nodes (ANs) can be referred to as base stations (BSs), NodeBs, evolved NodeBs (eNBs), next Generation NodeBs (gNB), RAN nodes, and so forth, and can comprise ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell).
  • the RAN 510 may include one or more RAN nodes for providing macrocells, e.g., a macro RAN node 511, and one or more RAN nodes for providing femtocells or picocells (e.g., cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells), e.g., a low power (LP) RAN node 512.
  • any of the RAN nodes 511 and 512 can terminate the air interface protocol and can be the first point of contact for the UEs 501 and 502.
  • any of the RAN nodes 511 and 512 can fulfill various logical functions for the RAN 510 including, but not limited to, radio network controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management.
  • the UEs 501 and 502 can be configured to communicate using Orthogonal Frequency-Division Multiplexing (OFDM) communication signals with each other or with any of the RAN nodes 511 and 512 over a multicarrier communication channel in accordance with various communication techniques, such as, but not limited to, an Orthogonal Frequency-Division Multiple Access (OFDMA) communication technique.
  • the OFDM signals can comprise a plurality of orthogonal subcarriers.
  • a downlink resource grid can be used for downlink transmissions.
  • the grid can be a time-frequency grid, called a resource grid or time-frequency resource grid, which is the physical resource in the downlink in each slot.
  • a time-frequency plane representation is a common practice for OFDM systems, which makes it intuitive for radio resource allocation.
  • Each column and each row of the resource grid correspond to one OFDM symbol and one OFDM subcarrier, respectively.
  • the duration of the resource grid in the time domain corresponds to one slot in a radio frame.
  • the smallest time-frequency unit in a resource grid is denoted as a resource element.
  • Each resource grid comprises a number of resource blocks, which describe the mapping of certain physical channels to resource elements.
  • Each resource block comprises a collection of resource elements; in the frequency domain, this may represent the smallest quantity of resources that currently can be allocated. There are several different physical downlink channels that are conveyed using such resource blocks.
  • the physical downlink shared channel may carry user data and higher-layer signaling to the UEs 501 and 502.
  • the physical downlink control channel (PDCCH) may carry information about the transport format and resource allocations related to the PDSCH channel, among other things. It may also inform the UEs 501 and 502 about the transport format, resource allocation, and H-ARQ (Hybrid Automatic Repeat Request) information related to the uplink shared channel.
  • downlink scheduling (assigning control and shared channel resource blocks to the UE 502 within a cell) may be performed at any of the RAN nodes 511 and 512 based on channel quality information fed back from any of the UEs 501 and 502.
  • the downlink resource assignment information may be sent on the PDCCH used for (e.g., assigned to) each of the UEs 501 and 502.
  • the PDCCH may use control channel elements (CCEs) to convey the control information.
  • the PDCCH complex-valued symbols may first be organized into quadruplets, which may then be permuted using a sub- block interleaver for rate matching.
  • Each PDCCH may be transmitted using one or more of these CCEs, where each CCE may correspond to nine sets of four physical resource elements known as resource element groups (REGs).
  • the PDCCH can be transmitted using one or more CCEs, depending on the size of the downlink control information (DCI) and the channel condition.
  • There can be four or more different PDCCH formats defined in LTE with different numbers of CCEs (e.g., aggregation level, L = 1, 2, 4, or 8).
  • Some embodiments may use concepts for resource allocation for control channel information that are an extension of the above-described concepts.
  • some embodiments may utilize an enhanced physical downlink control channel (EPDCCH) that uses PDSCH resources for control information transmission.
  • the EPDCCH may be transmitted using one or more enhanced control channel elements (ECCEs). Similar to above, each ECCE may correspond to nine sets of four physical resource elements known as enhanced resource element groups (EREGs). An ECCE may have other numbers of EREGs in some situations.
  • the RAN 510 is shown to be communicatively coupled to a core network (CN) 520 via an S1 interface 513.
  • the CN 520 may be an evolved packet core (EPC) network, a NextGen Packet Core (NPC) network, or some other type of CN.
  • the S1 interface 513 is split into two parts: an S1-U interface 514, which carries traffic data between the RAN nodes 511 and 512 and a serving gateway (S-GW) 522, and an S1-mobility management entity (MME) interface 515, which is a signaling interface between the RAN nodes 511 and 512 and MMEs 521.
  • the CN 520 comprises the MMEs 521, the S-GW 522, a Packet Data Network (PDN) Gateway (P-GW) 523, and a home subscriber server (HSS) 524.
  • the MMEs 521 may be similar in function to the control plane of legacy Serving General Packet Radio Service (GPRS) Support Nodes (SGSN).
  • the MMEs 521 may manage mobility aspects in access such as gateway selection and tracking area list management.
  • the HSS 524 may comprise a database for network users, including subscription-related information to support the network entities' handling of communication sessions.
  • the CN 520 may comprise one or several HSSs 524, depending on the number of mobile subscribers, on the capacity of the equipment, on the organization of the network, etc.
  • the HSS 524 can provide support for routing/roaming, authentication, and authorization, among other functions.
  • the S-GW 522 may terminate the S1 interface 513 toward the RAN 510 and route data packets between the RAN 510 and the CN 520.
  • the S-GW 522 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
  • the P-GW 523 may terminate an SGi interface toward a PDN.
  • the P-GW 523 may route data packets between the CN 520 (e.g., an EPC network) and external networks such as a network including an application server 530 (alternatively referred to as application function (AF)) via an Internet Protocol (IP) interface 525.
  • the application server 530 may be an element offering applications that use IP bearer resources with the CN 520 (e.g., UMTS Packet Services (PS) domain, LTE PS data services, etc.).
  • the P-GW 523 is shown to be communicatively coupled to the application server 530 via the IP communications interface 525.
  • the application server 530 can also be configured to support one or more communication services (e.g., Voice-over-Internet Protocol (VoIP) sessions, PTT sessions, group communication sessions, social networking services, etc.) for the UEs 501 and 502 via the CN 520.
  • the P-GW 523 may further be a node for policy enforcement and charging data collection.
  • a Policy and Charging Rules Function (PCRF) 526 is the policy and charging control element of the CN 520.
  • the PCRF 526 may be communicatively coupled to the application server 530 via the P-GW 523.
  • the application server 530 may signal the PCRF 526 to indicate a new service flow and select the appropriate Quality of Service (QoS) and charging parameters.
  • the PCRF 526 may provision this rule into a Policy and Charging Enforcement Function (PCEF) (not shown) with the appropriate traffic flow template (TFT) and QoS class identifier (QCI), which commences the QoS and charging as specified by the application server 530.
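The provisioning flow described above can be sketched as follows. This is a hypothetical illustration only; the class and field names are ours and do not come from any 3GPP API, and a real PCC rule carries many more parameters.

```python
# Toy model of the PCRF building a policy/charging rule for a new service
# flow signaled by the application server, to be pushed to the PCEF.
from dataclasses import dataclass

@dataclass
class PccRule:
    flow: str  # identifier of the new service flow (illustrative)
    qci: int   # QoS class identifier selected for the flow
    tft: str   # traffic flow template to install at the PCEF

def provision_rule(flow: str, qci: int, tft: str) -> PccRule:
    """PCRF side: assemble the rule; the PCEF then commences the QoS
    handling and charging specified for this flow."""
    return PccRule(flow=flow, qci=qci, tft=tft)
```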
  • FIG. 6 illustrates example components of a device 600 in accordance with some embodiments.
  • the device 600 may include an application circuitry 602, a baseband circuitry 604, a Radio Frequency (RF) circuitry 606, a front-end module (FEM) circuitry 608, one or more antennas 610, and a power management circuitry (PMC) 612 coupled together at least as shown.
  • the components of the illustrated device 600 may be included in a UE or a RAN node.
  • the device 600 may include fewer elements (e.g., a RAN node may not utilize the application circuitry 602, and instead include a processor/controller to process IP data received from an EPC).
  • the device 600 may include additional elements such as, for example, memory/storage, display, camera, sensor, or input/output (I/O) interface.
  • the components described below may be included in more than one device (e.g., said circuitries may be separately included in more than one device for Cloud-RAN (C-RAN) implementations).
  • the application circuitry 602 may include one or more application processors.
  • the application circuitry 602 may include circuitry such as, but not limited to, one or more single-core or multi-core processors.
  • the processor(s) may include any combination of general-purpose processors and dedicated processors (e.g., graphics processors, application processors, etc.).
  • the processors may be coupled with or may include memory/storage and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the device 600.
  • processors of the application circuitry 602 may process IP data packets received from an EPC.
  • the baseband circuitry 604 may include circuitry such as, but not limited to, one or more single-core or multi-core processors.
  • the baseband circuitry 604 may include one or more baseband processors or control logic to process baseband signals received from a receive signal path of the RF circuitry 606 and to generate baseband signals for a transmit signal path of the RF circuitry 606.
  • the baseband processing circuitry 604 may interface with the application circuitry 602 for generation and processing of the baseband signals and for controlling operations of the RF circuitry 606.
  • the baseband circuitry 604 may include a third generation (3G) baseband processor 604A, a fourth generation (4G) baseband processor 604B, a fifth generation (5G) baseband processor 604C, or other baseband processor(s) 604D for other existing generations, generations in development or to be developed in the future (e.g., second generation (2G), sixth generation (6G), etc.).
  • the baseband circuitry 604 (e.g., one or more of the baseband processors 604A-D) may handle various radio control functions that enable communication with one or more radio networks via the RF circuitry 606.
  • the functionality of the baseband processors 604A-D may be included in modules stored in a memory 604G and executed via a central processing unit (CPU) 604E.
  • the radio control functions may include, but are not limited to, signal modulation/demodulation, encoding/decoding, and radio frequency shifting.
  • modulation/demodulation circuitry of the baseband circuitry 604 may include Fast-Fourier Transform (FFT), precoding, or constellation mapping/demapping functionality.
  • encoding/decoding circuitry of the baseband circuitry 604 may include convolution, tail-biting convolution, turbo, Viterbi, or Low Density Parity Check (LDPC) encoder/decoder functionality.
  • the encoder/decoder functionality is not limited to these examples and may include other suitable functionality in other embodiments.
  • the baseband circuitry 604 may include one or more audio digital signal processor(s) (DSP) 604F.
  • the audio DSP(s) 604F may include elements for compression/decompression and echo cancellation and may include other suitable processing elements in other embodiments.
  • Components of the baseband circuitry 604 may be suitably combined in a single chip or a single chipset, or disposed on a same circuit board in some embodiments.
  • some or all of the constituent components of the baseband circuitry 604 and the application circuitry 602 may be implemented together such as, for example, on a system on a chip (SOC).
  • the baseband circuitry 604 may provide for communication compatible with one or more radio technologies.
  • the baseband circuitry 604 may support communication with an evolved universal terrestrial radio access network (EUTRAN) or other wireless metropolitan area networks (WMAN), a wireless local area network (WLAN), or a wireless personal area network (WPAN).
  • Embodiments in which the baseband circuitry 604 is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry.
  • the RF circuitry 606 may enable communication with wireless networks using modulated electromagnetic radiation through a non-solid medium.
  • the RF circuitry 606 may include switches, filters, amplifiers, etc. to facilitate the communication with the wireless network.
  • the RF circuitry 606 may include a receive signal path which may include circuitry to down convert RF signals received from the FEM circuitry 608 and provide baseband signals to the baseband circuitry 604.
  • the RF circuitry 606 may also include a transmit signal path which may include circuitry to up convert baseband signals provided by the baseband circuitry 604 and provide RF output signals to the FEM circuitry 608 for transmission.
  • the receive signal path of the RF circuitry 606 may include a mixer circuitry 606A, an amplifier circuitry 606B and a filter circuitry 606C.
  • the transmit signal path of the RF circuitry 606 may include the filter circuitry 606C and the mixer circuitry 606A.
  • the RF circuitry 606 may also include a synthesizer circuitry 606D for synthesizing a frequency for use by the mixer circuitry 606A of the receive signal path and the transmit signal path.
  • the mixer circuitry 606A of the receive signal path may be configured to down convert RF signals received from the FEM circuitry 608 based on the synthesized frequency provided by synthesizer circuitry 606D.
  • the amplifier circuitry 606B may be configured to amplify the down-converted signals and the filter circuitry 606C may be a low-pass filter (LPF) or band-pass filter (BPF) configured to remove unwanted signals from the down-converted signals to generate output baseband signals.
  • Output baseband signals may be provided to the baseband circuitry 604 for further processing.
  • the output baseband signals may be zero-frequency baseband signals, although this is not a requirement.
  • the mixer circuitry 606A of the receive signal path may comprise passive mixers, although the scope of the embodiments is not limited in this respect.
  • the mixer circuitry 606A of the transmit signal path may be configured to up convert input baseband signals based on the synthesized frequency provided by the synthesizer circuitry 606D to generate RF output signals for the FEM circuitry 608.
  • the baseband signals may be provided by the baseband circuitry 604 and may be filtered by the filter circuitry 606C.
  • the mixer circuitry 606A of the receive signal path and the mixer circuitry 606A of the transmit signal path may include two or more mixers and may be arranged for quadrature downconversion and upconversion, respectively.
  • the mixer circuitry 606A of the receive signal path and the mixer circuitry 606A of the transmit signal path may include two or more mixers and may be arranged for image rejection (e.g., Hartley image rejection).
  • the mixer circuitry 606A of the receive signal path and the mixer circuitry 606A may be arranged for direct downconversion and direct upconversion, respectively.
  • the mixer circuitry 606A of the receive signal path and the mixer circuitry 606A of the transmit signal path may be configured for super-heterodyne operation.
  • the output baseband signals and the input baseband signals may be analog baseband signals, although the scope of the embodiments is not limited in this respect.
  • the output baseband signals and the input baseband signals may be digital baseband signals.
  • the RF circuitry 606 may include analog-to-digital converter (ADC) and digital-to-analog converter (DAC) circuitry and the baseband circuitry 604 may include a digital baseband interface to communicate with the RF circuitry 606.
  • a separate radio IC circuitry may be provided for processing signals for each spectrum, although the scope of the embodiments is not limited in this respect.
  • the synthesizer circuitry 606D may be a fractional-N synthesizer or a fractional N/N+1 synthesizer, although the scope of the embodiments is not limited in this respect as other types of frequency synthesizers may be suitable.
  • the synthesizer circuitry 606D may be a delta-sigma synthesizer, a frequency multiplier, or a synthesizer comprising a phase-locked loop with a frequency divider.
  • the synthesizer circuitry 606D may be configured to synthesize an output frequency for use by the mixer circuitry 606A of the RF circuitry 606 based on a frequency input and a divider control input.
  • the synthesizer circuitry 606D may be a fractional N/N+1 synthesizer.
  • frequency input may be provided by a voltage controlled oscillator (VCO), although that is not a requirement.
  • Divider control input may be provided by either the baseband circuitry 604 or the application circuitry 602 (such as an applications processor) depending on the desired output frequency.
  • a divider control input (e.g., N) may be determined from a look-up table based on a channel indicated by the application circuitry 602.
  • the synthesizer circuitry 606D of the RF circuitry 606 may include a divider, a delay- locked loop (DLL), a multiplexer and a phase accumulator.
  • the divider may be a dual modulus divider (DMD) and the phase accumulator may be a digital phase accumulator (DPA).
  • the DMD may be configured to divide the input signal by either N or N+l (e.g., based on a carry out) to provide a fractional division ratio.
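As a toy illustration of the N/N+1 behavior: driving the dual-modulus divider from a phase accumulator is one common way to generate the carry-out, so that the average division ratio converges to a fractional value. This simulation is ours, not from the patent, and the parameter names are assumptions.

```python
def fractional_divide(n: int, frac_num: int, frac_den: int, cycles: int) -> float:
    """Simulate a dual-modulus divider steered by a phase accumulator:
    divide by N+1 on cycles where the accumulator carries out, otherwise
    by N, so the average ratio approaches N + frac_num/frac_den."""
    acc = 0
    total = 0
    for _ in range(cycles):
        acc += frac_num
        if acc >= frac_den:      # carry out -> divide by N+1 this cycle
            acc -= frac_den
            total += n + 1
        else:                    # no carry -> divide by N
            total += n
    return total / cycles
```

Averaged over a whole number of `frac_den` cycles, the ratio is exactly N + frac_num/frac_den.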
  • the DLL may include a set of cascaded, tunable, delay elements, a phase detector, a charge pump and a D-type flip-flop.
  • the delay elements may be configured to break a VCO period up into Nd equal packets of phase, where Nd is the number of delay elements in the delay line.
  • the synthesizer circuitry 606D may be configured to generate a carrier frequency as the output frequency, while in other embodiments, the output frequency may be a multiple of the carrier frequency (e.g., twice the carrier frequency, four times the carrier frequency) and used in conjunction with quadrature generator and divider circuitry to generate multiple signals at the carrier frequency with multiple different phases with respect to each other.
  • the output frequency may be an LO frequency (fLO).
  • the RF circuitry 606 may include an IQ/polar converter.
  • the FEM circuitry 608 may include a receive signal path which may include circuitry configured to operate on RF signals received from the one or more antennas 610, amplify the received signals and provide the amplified versions of the received signals to the RF circuitry 606 for further processing.
  • the FEM circuitry 608 may also include a transmit signal path which may include circuitry configured to amplify signals for transmission provided by the RF circuitry 606 for transmission by one or more of the one or more antennas 610.
  • the amplification through the transmit or receive signal paths may be done solely in the RF circuitry 606, solely in the FEM circuitry 608, or in both the RF circuitry 606 and the FEM circuitry 608.
  • the FEM circuitry 608 may include a TX/RX switch to switch between transmit mode and receive mode operation.
  • the FEM circuitry 608 may include a receive signal path and a transmit signal path.
  • the receive signal path of the FEM circuitry 608 may include an LNA to amplify received RF signals and provide the amplified received RF signals as an output (e.g., to the RF circuitry 606).
  • the transmit signal path of the FEM circuitry 608 may include a power amplifier (PA) to amplify input RF signals (e.g., provided by the RF circuitry 606), and one or more filters to generate RF signals for subsequent transmission (e.g., by one or more of the one or more antennas 610).
  • the PMC 612 may manage power provided to the baseband circuitry 604.
  • the PMC 612 may control power-source selection, voltage scaling, battery charging, or DC-to-DC conversion.
  • the PMC 612 may often be included when the device 600 is capable of being powered by a battery, for example, when the device 600 is included in a UE.
  • the PMC 612 may increase the power conversion efficiency while providing desirable implementation size and heat dissipation characteristics.
  • FIG. 6 shows the PMC 612 coupled only with the baseband circuitry 604.
  • the PMC 612 may be additionally or alternatively coupled with, and perform similar power management operations for, other components such as, but not limited to, the application circuitry 602, the RF circuitry 606, or the FEM circuitry 608.
  • the PMC 612 may control, or otherwise be part of, various power saving mechanisms of the device 600. For example, if the device 600 is in an RRC Connected state, where it is still connected to the RAN node as it expects to receive traffic shortly, then it may enter a state known as Discontinuous Reception Mode (DRX) after a period of inactivity. During this state, the device 600 may power down for brief intervals of time and thus save power.
  • the device 600 may transition off to an RRC Idle state, where it disconnects from the network and does not perform operations such as channel quality feedback, handover, etc.
  • the device 600 enters a very low power state and performs paging, in which it periodically wakes up to listen to the network and then powers down again.
  • the device 600 may not receive data in this state, and in order to receive data, it transitions back to an RRC Connected state.
  • An additional power saving mode may allow a device to be unavailable to the network for periods longer than a paging interval (ranging from seconds to a few hours). During this time, the device is totally unreachable to the network and may power down completely. Any data sent during this time incurs a large delay, and it is assumed the delay is acceptable.
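The power states described above can be summarized as a small state machine. This is a deliberately simplified model of our own, not part of the disclosure; real RRC state handling involves many more triggers, and the inactivity thresholds below are made-up placeholders.

```python
from enum import Enum, auto

class PowerState(Enum):
    """Simplified model of the UE power states discussed above."""
    RRC_CONNECTED = auto()  # connected and expecting traffic
    DRX = auto()            # connected, but powers down for brief intervals
    RRC_IDLE = auto()       # disconnected; wakes periodically for paging
    PSM = auto()            # unreachable for longer than a paging interval

def next_state(state: PowerState, inactive_ticks: int) -> PowerState:
    """Step toward deeper sleep as inactivity grows (thresholds invented)."""
    if state is PowerState.RRC_CONNECTED and inactive_ticks > 10:
        return PowerState.DRX
    if state is PowerState.DRX and inactive_ticks > 100:
        return PowerState.RRC_IDLE
    if state is PowerState.RRC_IDLE and inactive_ticks > 10_000:
        return PowerState.PSM
    return state
```

Receiving data from any sleep state requires transitioning back toward RRC Connected, which is where the delay trade-off noted above comes from.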
  • Processors of the application circuitry 602 and processors of the baseband circuitry 604 may be used to execute elements of one or more instances of a protocol stack.
  • processors of the baseband circuitry 604, alone or in combination, may be used to execute Layer 3, Layer 2, or Layer 1 functionality, while processors of the application circuitry 602 may utilize data (e.g., packet data) received from these layers and further execute Layer 4 functionality (e.g., transmission control protocol (TCP) and user datagram protocol (UDP) layers).
  • Layer 3 may comprise a radio resource control (RRC) layer, described in further detail below.
  • Layer 2 may comprise a medium access control (MAC) layer, a radio link control (RLC) layer, and a packet data convergence protocol (PDCP) layer, described in further detail below.
  • Layer 1 may comprise a physical (PHY) layer of a UE/RAN node, described in further detail below.
  • FIG. 7 illustrates example interfaces of baseband circuitry in accordance with some embodiments.
  • the baseband circuitry 604 of FIG. 6 may comprise processors 604A-604E and a memory 604G utilized by said processors.
  • Each of the processors 604A-604E may include a memory interface, 704A-704E, respectively, to send/receive data to/from the memory 604G.
  • the baseband circuitry 604 may further include one or more interfaces to communicatively couple to other circuitries/devices, such as:
  • a memory interface 712 (e.g., an interface to send/receive data to/from memory external to the baseband circuitry 604);
  • an application circuitry interface 714 (e.g., an interface to send/receive data to/from the application circuitry 602 of FIG. 6);
  • an RF circuitry interface 716 (e.g., an interface to send/receive data to/from the RF circuitry 606 of FIG. 6);
  • a wireless hardware connectivity interface 718 (e.g., an interface to send/receive data to/from Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components); and
  • a power management interface 720 (e.g., an interface to send/receive power or control signals to/from the PMC 612).
  • FIG. 8 is an illustration of a control plane protocol stack in accordance with some embodiments.
  • a control plane 800 is shown as a communications protocol stack between the UE 501 (or alternatively, the UE 502), the RAN node 511 (or alternatively, the RAN node 512), and the MME 521.
  • a PHY layer 801 may transmit or receive information used by a MAC layer 802 over one or more air interfaces.
  • the PHY layer 801 may further perform link adaptation or adaptive modulation and coding (AMC), power control, cell search (e.g., for initial synchronization and handover purposes), and other measurements used by higher layers, such as an RRC layer 805.
  • the PHY layer 801 may still further perform error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, modulation/demodulation of physical channels, interleaving, rate matching, mapping onto physical channels, and Multiple Input Multiple Output (MIMO) antenna processing.
  • the MAC layer 802 may perform mapping between logical channels and transport channels, multiplexing of MAC service data units (SDUs) from one or more logical channels onto transport blocks (TB) to be delivered to PHY via transport channels, de-multiplexing MAC SDUs to one or more logical channels from transport blocks (TB) delivered from the PHY via transport channels, multiplexing MAC SDUs onto TBs, scheduling information reporting, error correction through hybrid automatic repeat request (HARQ), and logical channel prioritization.
  • An RLC layer 803 may operate in a plurality of modes of operation, including: Transparent Mode (TM), Unacknowledged Mode (UM), and Acknowledged Mode (AM).
  • the RLC layer 803 may execute transfer of upper layer protocol data units (PDUs), error correction through automatic repeat request (ARQ) for AM data transfers, and concatenation, segmentation and reassembly of RLC SDUs for UM and AM data transfers.
  • the RLC layer 803 may also execute re-segmentation of RLC data PDUs for AM data transfers, reorder RLC data PDUs for UM and AM data transfers, detect duplicate data for UM and AM data transfers, discard RLC SDUs for UM and AM data transfers, detect protocol errors for AM data transfers, and perform RLC re-establishment.
  • a PDCP layer 804 may execute header compression and decompression of IP data, maintain PDCP Sequence Numbers (SNs), perform in-sequence delivery of upper layer PDUs at re-establishment of lower layers, eliminate duplicates of lower layer SDUs at re-establishment of lower layers for radio bearers mapped on RLC AM, cipher and decipher control plane data, perform integrity protection and integrity verification of control plane data, control timer-based discard of data, and perform security operations (e.g., ciphering, deciphering, integrity protection, integrity verification, etc.).
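Two of the PDCP duties listed above, duplicate elimination by sequence number and in-sequence delivery, can be sketched together. This is an illustrative toy of ours, not the 3GPP-specified procedure, which additionally handles SN wrap-around and reordering windows.

```python
def deliver_in_sequence(received: list[tuple[int, bytes]]) -> list[bytes]:
    """Toy PDCP receiver: drop lower-layer SDUs whose sequence number (SN)
    was already seen, then deliver the survivors in SN order."""
    seen: dict[int, bytes] = {}
    for sn, payload in received:
        if sn not in seen:  # duplicate elimination: keep first arrival only
            seen[sn] = payload
    return [seen[sn] for sn in sorted(seen)]  # in-sequence delivery
```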
  • the main services and functions of the RRC layer 805 may include broadcast of system information (e.g., included in Master Information Blocks (MIBs) or System Information Blocks (SIBs) related to the non-access stratum (NAS)), broadcast of system information related to the access stratum (AS), paging, establishment, maintenance and release of an RRC connection between the UE and E-UTRAN (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), establishment, configuration, maintenance and release of point-to-point radio bearers, security functions including key management, inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting.
  • Said MIBs and SIBs may comprise one or more information elements (IEs), which may each comprise individual data fields or data structures.
  • the UE 501 and the RAN node 511 may utilize a Uu interface (e.g., an LTE-Uu interface) to exchange control plane data via a protocol stack comprising the PHY layer 801, the MAC layer 802, the RLC layer 803, the PDCP layer 804, and the RRC layer 805.
  • non-access stratum (NAS) protocols 806 form the highest stratum of the control plane between the UE 501 and the MME 521.
  • the NAS protocols 806 support the mobility of the UE 501 and the session management procedures to establish and maintain IP connectivity between the UE 501 and the P-GW 523.
  • An S1 Application Protocol (S1-AP) layer 815 may support the functions of the S1 interface 513 and comprise Elementary Procedures (EPs).
  • An EP is a unit of interaction between the RAN node 511 and the CN 520.
  • the S1-AP layer 815 services may comprise two groups: UE-associated services and non-UE-associated services. These services perform functions including, but not limited to, E-UTRAN Radio Access Bearer (E-RAB) management.
  • a stream control transmission protocol (SCTP) layer (alternatively referred to as the stream control transmission protocol/internet protocol (SCTP/IP) layer) 814 may ensure reliable delivery of signaling messages between the RAN node 511 and the MME 521 based, in part, on the IP protocol, supported by an IP layer 813.
  • An L2 layer 812 and an L1 layer 811 may refer to communication links (e.g., wired or wireless) used by the RAN node 511 and the MME 521 to exchange information.
  • the RAN node 511 and the MME 521 may utilize an S1-MME interface to exchange control plane data via a protocol stack comprising the L1 layer 811, the L2 layer 812, the IP layer 813, the SCTP layer 814, and the S1-AP layer 815.
  • FIG. 9 is an illustration of a user plane protocol stack in accordance with some embodiments.
  • a user plane 900 is shown as a communications protocol stack between the UE 501 (or alternatively, the UE 502), the RAN node 511 (or alternatively, the RAN node 512), the S-GW 522, and the P-GW 523.
  • the user plane 900 may utilize at least some of the same protocol layers as the control plane 800.
  • the UE 501 and the RAN node 511 may utilize a Uu interface (e.g., an LTE-Uu interface) to exchange user plane data via a protocol stack comprising the PHY layer 801, the MAC layer 802, the RLC layer 803, and the PDCP layer 804.
  • a General Packet Radio Service (GPRS) Tunneling Protocol for the user plane (GTP-U) layer 904 may be used for carrying user data within the GPRS core network and between the radio access network and the core network.
  • the user data transported can be packets in any of IPv4, IPv6, or PPP formats, for example.
  • a UDP and IP security (UDP/IP) layer 903 may provide checksums for data integrity, port numbers for addressing different functions at the source and destination, and encryption and authentication on the selected data flows.
  • the RAN node 511 and the S-GW 522 may utilize the S1-U interface 514 to exchange user plane data via a protocol stack comprising the L1 layer 811, the L2 layer 812, the UDP/IP layer 903, and the GTP-U layer 904.
  • the S-GW 522 and the P-GW 523 may utilize an S5/S8a interface to exchange user plane data via a protocol stack comprising the L1 layer 811, the L2 layer 812, the UDP/IP layer 903, and the GTP-U layer 904.
  • the NAS protocols 806 support the mobility of the UE 501 and the session management procedures to establish and maintain IP connectivity between the UE 501 and the P-GW 523.
  • FIG. 10 illustrates components of a core network in accordance with some embodiments.
  • the components of the CN 520 may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium).
  • network function virtualization (NFV) is utilized to virtualize any or all of the above-described network node functions via executable instructions stored in one or more computer-readable storage mediums (described in further detail below).
  • a logical instantiation of the CN 520 may be referred to as a network slice 1001.
  • a logical instantiation of a portion of the CN 520 may be referred to as a network sub-slice 1002 (e.g., the network sub-slice 1002 is shown to include the P-GW 523 and the PCRF 526).
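The slice/sub-slice relationship described here is simply containment, which the following hypothetical sketch makes concrete. The variable names mirror the reference numerals above; the set-based representation is our illustration, not how a slice is actually modeled.

```python
# Hypothetical composition of a network slice from core-network functions:
# the sub-slice groups the P-GW and PCRF, and the full slice adds the
# remaining EPC elements from FIG. 5.
network_sub_slice_1002 = {"P-GW 523", "PCRF 526"}
network_slice_1001 = {"MME 521", "S-GW 522", "HSS 524"} | network_sub_slice_1002
```

By construction the sub-slice is a portion (subset) of the full slice.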
  • FIG. 11 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
  • FIG. 11 shows a diagrammatic representation of hardware resources 1100 including one or more processors (or processor cores) 1110, one or more memory/storage devices 1120, and one or more communication resources 1130, each of which may be communicatively coupled via a bus 1140.
  • for embodiments utilizing node virtualization (e.g., NFV), a hypervisor 1102 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 1100.
  • the processors 1110 may include, for example, a processor 1112 and a processor 1114.
  • the memory/storage devices 1120 may include main memory, disk storage, or any suitable combination thereof.
  • the memory/storage devices 1120 may include, but are not limited to, any type of volatile or non-volatile memory such as dynamic random access memory (DRAM), static random-access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc.
  • the communication resources 1130 may include interconnection or network interface components or other suitable devices to communicate with one or more peripheral devices 1104 or one or more databases 1106 via a network 1108.
  • the communication resources 1130 may include wired communication components (e.g., for coupling via a Universal Serial Bus (USB)), cellular communication components, NFC components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components.
  • Instructions 1150 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least one of the processors 1110 to perform any one or more of the methodologies discussed herein.
  • the instructions 1150 may reside, completely or partially, within at least one of the processors 1110 (e.g., within the processor's cache memory), the memory/storage devices 1120, or any suitable combination thereof.
  • any portion of the instructions 1150 may be transferred to the hardware resources 1100 from any combination of the peripheral devices 1104 or the databases 1106. Accordingly, the memory of processors 1110, the memory/storage devices 1120, the peripheral devices 1104, and the databases 1106 are examples of computer-readable and machine-readable media.
  • Example 1 is an apparatus for a virtualized infrastructure manager (VIM), comprising: a first interface coupled to a network function virtualization infrastructure (NFVI); a second interface coupled to a virtualized network function manager (VNFM); and a processor coupled to the first interface and second interface, the processor configured to: receive a set of virtual processor usage metrics from the NFVI, the set of virtual processor usage metrics covering at least a collection period; generate a virtual processor usage measurement for a virtualized compute resource based at least in part on the virtual processor usage metrics; and report the virtual processor usage measurement to the VNFM.
  • Example 2 is the apparatus of Example 1, wherein the virtual processor usage metrics are virtual processor usage samples measured by the NFVI at pre-defined intervals.
  • Example 3 is the apparatus of Example 1, wherein the virtual processor usage metrics are an arithmetic mean of virtual processor usage samples measured in the collection period by the NFVI.
  • Example 4 is the apparatus of Example 1, wherein the virtual processor usage metrics are for an individual virtual processor or are consolidated for virtual processors of a virtualized compute resource.
  • Example 5 is the apparatus of Example 1, wherein the virtual processor usage metrics are formed by selecting a maximum among virtual processor usage samples measured in a given period.
  • Example 6 is the apparatus of any of Examples 1-5, wherein to generate the virtual processor usage measurement further comprises to generate the virtual processor usage measurement by processing usage metrics of multiple virtual processors and generating a consolidated measurement.
  • Example 7 is the apparatus of any of Examples 1-5, wherein the VIM generates the virtual processor usage measurement by using subcounters for multiple virtual processors.
  • Example 8 is a system for management of virtualized resources, the system comprising: a network function virtualization infrastructure (NFVI) configured to: sample virtual processor usage of a virtualized compute resource; and provide virtual processor usage metrics to a virtualized infrastructure manager (VIM); the VIM configured to: process the virtual processor usage metrics from the NFVI; generate a virtual processor usage measurement for the virtualized compute resource based at least in part on the virtual processor usage metrics; and report the virtual processor usage measurement to a virtualized network function manager (VNFM); and the VNFM configured to receive the virtual processor usage measurement from the VIM.
  • Example 9 is the system of Example 8, further comprising a virtualized compute resource that includes one or more virtual processors for which the NFVI samples for virtual processor usage individually or collectively.
  • Example 10 is the system of Example 8, wherein the virtual processor usage measurement is a mean virtual processor usage or peak virtual processor usage.
  • Example 11 is the system of Example 8, wherein the virtual processor usage measurement includes a ratio of a first number of usage samples in a usage range to a second number of total usage samples.
  • Example 12 is the system of Example 8, wherein to generate the virtual processor usage measurement further comprises to generate a distribution of the received virtual processor usage metrics per usage level from a set of usage levels.
  • Example 13 is the system of any of Examples 8-11, wherein the virtualized compute resource includes a virtual central processing unit (CPU).
  • Example 14 is a method of managing virtualized resources (VRs), the method comprising: receiving a set of virtual processor usage metrics from a network function virtualization infrastructure (NFVI), the set of virtual processor usage metrics covering at least a collection period; generating a virtual processor usage measurement for a virtualized compute resource based at least in part on the set of virtual processor usage metrics; and reporting the virtual processor usage measurement to a virtualized network function manager (VNFM).
  • Example 15 is the method of Example 14, wherein the virtual processor usage metrics are for an individual virtual processor or are consolidated for virtual processors of a virtualized compute resource.
  • Example 16 is the method of Example 14, wherein reporting the virtual processor usage measurement further comprises reporting the virtual processor usage measurement to the VNFM using a notification that the virtual processor usage measurement is available.
  • Example 17 is the method of Example 14, wherein reporting the virtual processor usage measurement further comprises reporting the virtual processor usage measurement to the VNFM using a message including the virtual processor usage measurement value.
  • Example 18 is the method of Example 14, wherein the virtualized compute resource is identified by an object type and an object instance identifier.
  • Example 19 is the method of Example 14, wherein the virtualized compute resource is identified by a compute ID.
  • Example 20 is an apparatus comprising means to perform a method as exemplified in any of Examples 14-18.
  • Example 21 is a machine-readable storage including machine-readable instructions, when executed, to implement a method or realize an apparatus as exemplified in any of Examples 14-18.
  • Example 22 is a machine-readable medium including code, when executed, to cause a machine to perform the method of any one of Examples 14-18.
  • Example 23 is a computer program product comprising a computer-readable storage medium that stores instructions for execution by a processor to perform operations of a virtualized infrastructure manager (VIM), the operations, when executed by the processor, to perform a method, the method comprising: receiving a set of virtual processor usage metrics from a network function virtualization infrastructure (NFVI), the set of virtual processor usage metrics covering at least a collection period; generating a virtual processor usage measurement for a virtualized compute resource based at least in part on the set of virtual processor usage metrics; and reporting the virtual processor usage measurement to the virtualized network function manager (VNFM).
  • Example 24 is an apparatus for managing virtualized network functions (VNFs), the apparatus comprising: means for receiving a set of virtual processor usage metrics from a network function virtualization infrastructure (NFVI), the set of virtual processor usage metrics covering at least a collection period; means for generating a virtual processor usage measurement for a virtualized compute resource based at least in part on the set of virtual processor usage metrics; and means for reporting the virtual processor usage measurement to a virtualized network function manager (VNFM).
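The receive–generate–report flow recited in Examples 14 and 24 can be sketched as follows. This is a minimal sketch under assumptions: the dictionary layout and the `notify_vnfm` callable are illustrative stand-ins for the VIM-to-VNFM reporting interface, not part of the claims.

```python
from statistics import mean

def report_vcpu_usage(nfvi_metrics, compute_id, notify_vnfm):
    """Sketch of the three steps of Example 14: receive a set of virtual
    processor usage metrics covering a collection period, generate a
    measurement, and report it to the VNFM."""
    # Step 2: generate the measurement (mean usage, as one option).
    measurement = {
        "objectType": "virtualCompute",   # object type per Example 18
        "objectInstanceId": compute_id,   # a compute ID per Example 19
        "value": mean(nfvi_metrics),      # percentage
    }
    # Step 3: report to the VNFM (Examples 16-17 allow either an
    # availability notification or a message carrying the value).
    notify_vnfm(measurement)
    return measurement

reports = []
m = report_vcpu_usage([25.0, 75.0], "compute-7", reports.append)
print(m["value"])  # 50.0
```

`notify_vnfm` is passed in as a callable so the same sketch covers both reporting styles of Examples 16 and 17.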
  • Additional Example 1 may include the Virtualized Infrastructure Manager (VIM) comprising one or more processors configured to: receive the virtual CPU usage metric from the Network Function Virtualization Infrastructure (NFVI); perform processing of the virtual CPU usage metrics that were received within a collection period; generate the virtual CPU usage measurement for a Virtualized Compute Resource from the processing of the virtual CPU usage metrics; and report the virtual CPU usage measurements to the Virtualized Network Function Manager (VNFM).
  • Additional Example 2 may include the method according to Additional Example 1 and/or some other example herein, wherein the said virtual CPU usage metric is generated by NFVI by sampling the virtual CPU usage at the pre-defined interval.
  • Additional Example 3 may include the method according to Additional Example 1 and/or some other Additional Examples herein, wherein the said virtual CPU usage metric is generated by NFVI by taking the arithmetic mean of the virtual CPU usage samples measured in a given period.
  • Additional Example 4 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the said virtual CPU usage metric is generated by the NFVI by selecting the maximum among the virtual CPU usage samples measured in a given period.
  • Additional Example 5 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the said virtual CPU usage metric is generated by the NFVI by calculating the distribution of the virtual CPU usage samples measured in a given collection period.
  • Additional Example 6 may include the method according to Additional Example 1, 2 and/or 3 and/or some other Additional Example herein, wherein the said virtual CPU usage measurement is the mean virtual CPU usage.
  • Additional Example 7 may include the method according to Additional Example 1, 2 and/or 4 and/or some other Additional Example herein, wherein the said virtual CPU usage measurement is the peak virtual CPU usage.
  • Additional Example 8 may include the method according to Additional Example 1, 2 and/or 5 and/or some other Additional Example herein, wherein the said virtual CPU usage measurement contains the ratio of the number of usage samples that fell into each given usage range to the number of total usage samples of the virtual CPU.
  • Additional Example 9 may include the method according to Additional Example 1 and/or 6 and/or some other Additional Example herein, wherein the said processing is taking arithmetic mean of the received virtual CPU usage metric.
  • Additional Example 10 may include the method according to Additional Example 1 and 7 and/or some other Additional Example herein, wherein the said processing is taking a maximum of the received virtual CPU usage metric.
  • Additional Example 11 may include the method according to Additional Example 1 and 8 and/or some other Additional Example herein, wherein the said processing is generating the distribution of the received virtual CPU usage metric per usage level.
  • Additional Example 12 may include the method according to Additional Example 1 to 11 and/or some other Additional Example herein, wherein the VIM generates the virtual CPU usage measurement by using subcounters for multiple virtual CPUs.
  • Additional Example 13 may include the method according to Additional Examples 1 to 11 and/or some other Additional Example herein, wherein the VIM generates the virtual CPU usage measurement by selecting one of multiple virtual CPUs to report.
  • Additional Example 14 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the VIM generates the virtual CPU usage measurement by processing the usage metrics of multiple virtual CPUs, and generating a consolidated measurement.
  • Additional Example 15 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the VIM reports the virtual CPU usage measurements to VNFM by a notification informing the virtual CPU usage measurements are available.
  • Additional Example 16 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the VIM reports the virtual CPU usage measurements to VNFM by an operation including the virtual CPU usage measurements.
  • Additional Example 17 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the collection interval can be divided into multiple sampling intervals.
  • Additional Example 18 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the Virtualized Compute Resource is identified by object type and object instance identifier.
  • Additional Example 19 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the Virtualized Compute Resource is identified by the computeId.
  • Additional Example 20 may include the method according to Additional Example 1 and/or some other Additional Example herein, wherein the measurement may contain the measurement name, measurement value and timestamp.
  • Additional Example 21 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of Additional Examples 1-20, or any other method or process described herein.
  • Additional Example 22 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of Additional Examples 1-20, or any other method or process described herein.
  • Additional Example 23 may include an apparatus comprising logic, modules, and/or circuitry to perform one or more elements of a method described in or related to any of Additional Examples 1-20, or any other method or process described herein.
  • Additional Example 24 may include a method, technique, or process as described in or related to any of Additional Examples 1-20, or portions or parts thereof.
  • Additional Example 25 may include an apparatus comprising: one or more processors and one or more computer readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of Additional Examples 1-20, or portions thereof.
  • Additional Example 26 may include a method of communicating in a wireless network as shown and described herein.
  • Additional Example 27 may include a system for providing wireless communication as shown and described herein.
  • Additional Example 28 may include a device for providing wireless communication as shown and described herein.
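The per-vCPU subcounters and consolidated measurement described in Additional Examples 12 and 14 can be sketched as below. The field names and the choice of peak-per-vCPU with a mean consolidation are assumptions for illustration; the examples only require that both per-CPU and consolidated values can be produced.

```python
def build_usage_report(per_vcpu_metrics):
    """Combine per-virtual-CPU subcounters (Additional Example 12)
    with a consolidated measurement over multiple virtual CPUs
    (Additional Example 14)."""
    # One subcounter (here: peak usage) per virtual CPU.
    subcounters = {vcpu: max(samples)
                   for vcpu, samples in per_vcpu_metrics.items()}
    # Consolidated value across all virtual CPUs of the compute resource.
    consolidated = sum(subcounters.values()) / len(subcounters)
    return {"subcounters": subcounters, "consolidated": consolidated}

report = build_usage_report({"vcpu0": [10.0, 50.0], "vcpu1": [30.0, 70.0]})
print(report["consolidated"])  # 60.0
```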
  • performance measurement is obtained via a Status Counter (SC): The entity receives a metric at each predetermined interval. A measurement is generated by processing (e.g., taking the arithmetic mean or peak of) all of the samples received in the collection period.
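The status-counter collection above might be sketched as follows; the class and method names are assumptions, and mean/peak stand in for the processing step applied at the end of the collection period.

```python
from statistics import mean

class StatusCounter:
    """Sketch of the status counter (SC) method: receive a metric each
    sampling interval, then generate one measurement per collection
    period from all samples received in that period."""

    def __init__(self):
        self.samples = []

    def receive_metric(self, usage_percent):
        # Called once per pre-defined sampling interval.
        self.samples.append(usage_percent)

    def end_collection_period(self, method="mean"):
        # Produce one measurement from the period's samples, then reset.
        value = mean(self.samples) if method == "mean" else max(self.samples)
        self.samples = []
        return value

sc = StatusCounter()
for usage in (40.0, 55.0, 70.0, 35.0):
    sc.receive_metric(usage)
print(sc.end_collection_period("mean"))  # 50.0
```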
  • mean virtual CPU usage is used.
  • the mean virtual CPU usage can include the following:
  • the VIM receives the CPU utilization measurement for the virtual compute instance from the NFVI at the pre-defined interval, and then takes the arithmetic mean of the virtual CPU usage metrics received in the collection period.
  • Measurement Unit: Each measurement is a real value (Unit: %).
  • Measurement Group: VirtualizedComputeResource.
  • objectType is equal to "virtualCompute".
  • objectInstanceId corresponds to the computeId of the virtualized compute resource.
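Given that definition, assembling the mean virtual CPU usage measurement (including the name/value/timestamp content mentioned in Additional Example 20) might look like the following sketch. "MeanVcpuUsage" is an assumed measurement name; the text only specifies a real percentage value, the VirtualizedComputeResource group, objectType "virtualCompute", and objectInstanceId = computeId.

```python
import time
from statistics import mean

def mean_vcpu_usage(metrics, compute_id):
    """Assemble the mean virtual CPU usage measurement defined above
    from the metrics received in one collection period."""
    return {
        "measurementName": "MeanVcpuUsage",               # assumed label
        "measurementGroup": "VirtualizedComputeResource",
        "objectType": "virtualCompute",
        "objectInstanceId": compute_id,                   # the computeId
        "value": mean(metrics),                           # real value, unit: %
        "timestamp": time.time(),
    }

print(mean_vcpu_usage([20.0, 40.0, 60.0], "compute-0001")["value"])  # 40.0
```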
  • peak virtual CPU usage is used.
  • the peak virtual CPU usage can include the following:
  • the VIM receives the CPU utilization measurement for the virtual compute instance from the NFVI at the pre-defined interval, and then selects the maximum among the virtual CPU usage metrics received in the collection period.
  • Measurement Unit: Each measurement is a real value (Unit: %).
  • Measurement Group: VirtualizedComputeResource.
  • objectType is equal to "virtualCompute".
  • objectInstanceId corresponds to the computeId of the Virtualized Compute Resource.
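The distribution-style measurement of Examples 11-12 (and Additional Examples 5, 8 and 11) reports the ratio of usage samples falling into each given usage range; it can be sketched as a normalized histogram. The range boundaries used here are assumptions, since the text only requires a set of given ranges.

```python
from bisect import bisect_right

def usage_distribution(samples, bounds=(25.0, 50.0, 75.0)):
    """Ratio of usage samples per usage range: one count per range,
    divided by the total number of samples."""
    counts = [0] * (len(bounds) + 1)
    for s in samples:
        counts[bisect_right(bounds, s)] += 1  # index of the range s falls in
    total = len(samples)
    return [c / total for c in counts]

# Four samples over the ranges [0,25), [25,50), [50,75), [75,100]:
print(usage_distribution([10.0, 30.0, 30.0, 80.0]))  # [0.25, 0.5, 0.0, 0.25]
```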
  • Embodiments and implementations of the systems and methods described herein may include various operations, which may be embodied in machine-executable instructions to be executed by a computer system.
  • a computer system may include one or more general- purpose or special-purpose computers (or other electronic devices).
  • the computer system may include hardware components that include specific logic for performing the operations or may include a combination of hardware, software, and/or firmware.
  • Suitable networks for configuration and/or use as described herein include one or more local area networks, wide area networks, metropolitan area networks, and/or Internet or IP networks, such as the World Wide Web, a private Internet, a secure Internet, a value-added network, a virtual private network, an extranet, an intranet, or even stand-alone machines which communicate with other machines by physical transport of media.
  • a suitable network may be formed from parts or entireties of two or more other networks, including networks using disparate hardware and network communication technologies.
  • One suitable network includes a server and one or more clients; other suitable networks may contain other combinations of servers, clients, and/or peer-to-peer nodes, and a given computer system may function both as a client and as a server.
  • Each network includes at least two computers or computer systems, such as the server and/or clients.
  • a computer system may include a workstation, laptop computer, disconnectable mobile computer, server, mainframe, cluster, so-called “network computer” or "thin client,” tablet, smart phone, personal digital assistant or other hand-held computing device, "smart” consumer electronics device or appliance, medical device, or a combination thereof.
  • Suitable networks may include communications or networking software, such as the software available from Novell®, Microsoft®, and other vendors, and may operate using TCP/IP, SPX, IPX, and other protocols over twisted pair, coaxial, or optical fiber cables, telephone lines, radio waves, satellites, microwave relays, modulated AC power lines, physical media transfer, and/or other data transmission "wires" known to those of skill in the art.
  • the network may encompass smaller networks and/or be connectable to other networks through a gateway or similar mechanism.
  • Various techniques, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, magnetic or optical cards, solid-state memory devices, a nontransitory computer-readable storage medium, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the various techniques.
  • the computing device may include a processor, a storage medium readable by the processor (including volatile and nonvolatile memory and/or storage elements), at least one input device, and at least one output device.
  • the volatile and nonvolatile memory and/or storage elements may be a RAM, an EPROM, a flash drive, an optical drive, a magnetic hard drive, or other medium for storing electronic data.
  • the eNB (or other base station) and UE (or other mobile station) may also include a transceiver component, a counter component, a processing component, and/or a clock component or timer component.
  • One or more programs that may implement or utilize the various techniques described herein may use an application programming interface (API), reusable controls, and the like. Such programs may be implemented in a high-level procedural or an object-oriented programming language to communicate with a computer system. However, the program(s) may be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language, and combined with hardware implementations.
  • Each computer system includes one or more processors and/or memory; computer systems may also include various input devices and/or output devices.
  • the processor may include a general purpose device, such as an Intel®, AMD®, or other "off-the-shelf" microprocessor.
  • the processor may include a special purpose processing device, such as ASIC, SoC, SiP, FPGA, PAL, PLA, FPLA, PLD, or other customized or programmable device.
  • the memory may include static RAM, dynamic RAM, flash memory, one or more flip-flops, ROM, CD-ROM, DVD, disk, tape, or magnetic, optical, or other computer storage medium.
  • the input device(s) may include a keyboard, mouse, touch screen, light pen, tablet, microphone, sensor, or other hardware with accompanying firmware and/or software.
  • the output device(s) may include a monitor or other display, printer, speech or text synthesizer, switch, signal line, or other hardware with accompanying firmware and/or software.
  • a component may be implemented as a hardware circuit comprising custom very large scale integration (VLSI) circuits or gate arrays, or off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
  • a component may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
  • Components may also be implemented in software for execution by various types of processors.
  • An identified component of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, a procedure, or a function. Nevertheless, the executables of an identified component need not be physically located together, but may comprise disparate instructions stored in different locations that, when joined logically together, comprise the component and achieve the stated purpose for the component.
  • a component of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices.
  • operational data may be identified and illustrated herein within components, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
  • the components may be passive or active, including agents operable to perform desired functions.
  • a software module or component may include any type of computer instruction or computer-executable code located within a memory device.
  • a software module may, for instance, include one or more physical or logical blocks of computer instructions, which may be organized as a routine, program, object, component, data structure, etc., that perform one or more tasks or implement particular data types. It is appreciated that a software module may be implemented in hardware and/or firmware instead of or in addition to software.
  • One or more of the functional modules described herein may be separated into sub-modules and/or combined into a single or smaller number of modules.
  • a particular software module may include disparate instructions stored in different locations of a memory device, different memory devices, or different computers, which together implement the described functionality of the module.
  • a module may include a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices.
  • Some embodiments may be practiced in a distributed computing environment where tasks are performed by a remote processing device linked through a communications network.
  • software modules may be located in local and/or remote memory storage devices.
  • data being tied or rendered together in a database record may be resident in the same memory device, or across several memory devices, and may be linked together in fields of a record in a database across a network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Hardware Design (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A virtualized infrastructure manager (VIM) processes usage metrics for a network function virtualization infrastructure (NFVI) and reports virtual processor usage metrics to a virtualized network function manager (VNFM). These measurements can enable the VNFM to proactively scale virtualized network functions (VNFs) or virtualized network function components (VNFCs), giving more resources to VNFs/VNFCs that demonstrate a need while shifting resources away from VNFs/VNFCs that exhibit spare capacity. The usage of an individual virtual processor (also called a virtual CPU) that is part of a virtualized compute resource, or the consolidated usage of all virtual processors of a virtualized compute resource, is monitored by a VNFM for the purposes described above.
PCT/US2017/064538 2016-12-05 2017-12-04 Systems, methods and devices for virtual network function virtual processor usage reporting in cellular networks WO2018106604A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662430188P 2016-12-05 2016-12-05
US62/430,188 2016-12-05

Publications (1)

Publication Number Publication Date
WO2018106604A1 true WO2018106604A1 (fr) 2018-06-14

Family

ID=61054475

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/064538 WO2018106604A1 (fr) Systems, methods and devices for virtual network function virtual processor usage reporting in cellular networks

Country Status (1)

Country Link
WO (1) WO2018106604A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831609A (zh) * 2020-06-18 2020-10-27 中国科学院数据与通信保护研究教育中心 Method and system for unified management and distribution of binary file measurement values in a virtualized environment
EP3616362A4 (fr) * 2017-04-24 2021-01-13 Apple Inc. Network function virtualization infrastructure performance
WO2025074437A1 (fr) * 2023-10-07 2025-04-10 Jio Platforms Limited Method and system for monitoring resource usage by network node components

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160205004A1 (en) * 2015-01-13 2016-07-14 Intel IP Corporation Techniques for Monitoring Virtualized Network Functions or Network Functions Virtualization Infrastructure

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160205004A1 (en) * 2015-01-13 2016-07-14 Intel IP Corporation Techniques for Monitoring Virtualized Network Functions or Network Functions Virtualization Infrastructure

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3616362A4 (fr) * 2017-04-24 2021-01-13 Apple Inc. Network function virtualization infrastructure performance
US11243798B2 (en) 2017-04-24 2022-02-08 Apple Inc. Network function virtualization infrastructure performance
CN111831609A (zh) * 2020-06-18 2020-10-27 中国科学院数据与通信保护研究教育中心 Method and system for unified management and distribution of binary file measurement values in a virtualized environment
CN111831609B (zh) * 2020-06-18 2024-01-02 中国科学院数据与通信保护研究教育中心 Method and system for unified management and distribution of binary measurement values in a virtualized environment
WO2025074437A1 (fr) * 2023-10-07 2025-04-10 Jio Platforms Limited Method and system for monitoring resource usage by network node components

Similar Documents

Publication Publication Date Title
US11019538B2 (en) Systems, methods and devices for legacy system fallback in a cellular communications system
US11122453B2 (en) Systems, methods and devices for measurement configuration by a secondary node in EN-DC
US10917806B2 (en) Measurement job creation and performance data reporting for advanced networks including network slicing
US10833957B2 (en) Managing physical network function instances in a network service instance
US10749587B2 (en) Systems, methods and devices for using S-measure with new radio
US11063844B2 (en) Systems, methods and devices for virtual resource metric management
US20200110627A1 (en) Centralized unit and distributed unit connection in a virtualized radio access network
WO2018009340A1 Systems, methods and devices for control plane user plane separation for 5G radio access networks
EP3639450B1 Physical resource block indexing for coexistence of narrowband, carrier aggregation and wideband user equipment in new radio
WO2018128875A1 Instantiation and management of physical and virtualized network functions of a radio access network node
WO2018063998A1 Systems, methods and devices for a MAC-PHY split interface
US11265884B2 (en) Systems, methods and devices for uplink bearer and access category mapping
WO2018118788A1 Reporting of supported cellular capability combinations of a mobile user device
WO2018085459A1 Signaling of support for network controlled small gap, NCSG, enabling interruption control
WO2018175176A1 Systems, methods and devices for selecting cellular synchronization configurations
WO2018031138A1 Beam measurement and reporting in cellular networks
WO2018128894A1 Systems, methods and devices for alarm notification in a network function virtualization infrastructure
WO2018106604A1 Systems, methods and devices for virtual network function virtual processor usage reporting in cellular networks
WO2018102098A1 Systems, methods and devices for HARQ buffer status management
WO2018085029A1 SRS switching to a target TDD-CC in a carrier aggregation based wireless communication system
US11012883B2 (en) Measurement job suspension and resumption in network function virtualization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17835891

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17835891

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载