US20170371717A1 - Resource management in cloud systems - Google Patents
- Publication number
- US20170371717A1 (U.S. application Ser. No. 15/540,436, filed as US201515540436A)
- Authority
- US
- United States
- Prior art keywords
- resource
- resources
- virtual
- tenant
- affinity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/72—Admission control; Resource allocation using reservation actions during connection setup
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/80—Actions related to the user profile or the type of traffic
- H04L47/805—QOS or priority aware
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1074—Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45583—Memory management, e.g. access or allocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Definitions
- the present technology relates to a method and apparatus for allocating at least one virtual resource to a physical and/or software resource from a plurality of physical and/or software resources.
- the present technology relates to tenant affinity for resource management in telco cloud systems.
- NFV: Network Functions Virtualization
- a VNF can be composed of one or several virtual machines (VMs) and virtual networks, which together implement the network function. These VMs and virtual networks are commonly referred to as virtualized resources in the current invention.
- Physical clustering in this context means that a predefined set of computing and storage resources is exclusively assigned to application software from a specific vendor, also referred to as “tenant” within this document. Thus, application software from another vendor cannot use these resources, even if they are free. There are two main reasons for such “physical clustering”:
- Security: vendors may not want their VMs to be collocated on shared physical or hypervisor software resources with VMs from other vendors (tenants) for security reasons, due to, for example, the possibility of exploiting hypervisor or VM bugs to eavesdrop on traffic from a VM of a competing vendor.
- Performance: vendors (tenants) want to guarantee that the performance of their VNFs (and hence of the underlying VMs) is predictable, and a malfunctioning VM from a second vendor may impact the VMs of the first vendor. It is also easier for vendors to track and analyze failure reasons when their own VMs experience failures.
- Waste of data center resources: the pre-provisioning of resource clusters can lead to wasted resources, especially if the pre-provisioning is done in an inappropriate way such that it later does not match the actual requirements of the tenants. Specifically, the actual use of resources in such a cluster can vary dynamically due to diverse factors such as traffic load, failures, etc.
- the term “software” refers to one of the following: First, software that may be shared by different tenants and which is part of the system infrastructure, e.g. software in the form of hypervisor software; second, software that is provided by a tenant to perform application or service functionality, specifically network functions. If necessary and if not otherwise apparent from the context, explicit reference is made to “hypervisor software” vs. “application software” or “VNF software” to differentiate the two distinct uses of the term.
- FIG. 2 illustrates an example of the physical clustering problem outlined above.
- Four different clusters are created and assigned to their respective vendors (tenants), namely T-A, T-B, T-C and T-D.
- the Figure also shows how VMs are allocated to certain physical hosts (servers), and how, depending on traffic loads, etc., the physical clustering can lead to wasting resources within each cluster.
- the pool of resources in the data center that could have been shared is now fragmented into different clusters impeding the allocation of resources to vendors/tenants other than the owner/user of the cluster.
- VSphere allows several types of affinity to be defined:
- the last type of affinity is the one used to partition resources in the data center and create resource clusters, as the ones shown in FIG. 2 .
- These clusters of specific physical hosts can then be assigned to tenants (VM DRS group).
- cluster creation is made offline (pre-planned), ahead of receiving VM allocation requests, i.e., the allocation has already been designated beforehand. Therefore, it does not solve the issue of resource waste.
- the data center schema in FIG. 2 and other figures specifies four virtual machines (VMs) per physical server. This configuration has been chosen for illustration and may vary in actual embodiments, i.e. there may be one or more virtual machines per physical server.
- affinity rules can be very complex and labor-intensive to manage, especially in data center configurations with numerous tenants that have diverse workloads with different requirements.
- a matching system, which could be a cloud management system, collects information about available cloud appliances, which could be physical hosts or servers in the data center, and matches these appliances with user-requested services. Such requested services are applications deployed on a number of VMs.
- the matching system can also track and manage resources, so users can have specific rights and assigned resources are made available to the users.
- U.S. Pat. No. 8,402,139 B2 does not solve the problem of allocating resources based on expressed and required affinities in comparison with other tenant requests.
- the main difference of that solution is that it allows defragmenting the cloud/data center resources, which is related to the technology disclosed herein; in that case, however, the allocation is based on VM loads and the load priorities of the VMs as expressed in the requests by the customer. Therefore, it does not consider explicit inter-tenant affinity information.
- a method of allocating, by a virtualized resource management engine, at least one virtual resource to a physical and/or software resource from a plurality of physical and/or software resources, comprising the steps of:
- obtaining information used to identify a first tenant, and obtaining affinity information as a parameter of a resource allocation request, the affinity information specifying whether or not said requested virtual resource may be collocated on the same physical and/or software resource with one or more virtual resources of another tenant different from said first tenant;
- This method has the effect and advantage that resources in the data center may be allocated without having to pre-plan and/or pre-allocate physical and hypervisor software resources to tenants (vendors). Without such pre-allocation, fewer resources are needed in the data center, as the pool of resources is statistically shared among tenants, while at the same time affinity information, expressing constraints on the placement of virtual machines of different tenants, is taken into account when determining the allocation. Therefore, savings in infrastructure resources and capital expenditure are possible, as illustrated in FIG. 4, which at the same time also reduces the operational cost due to the reduced amount of required resources.
- Allocating of the at least one virtual resource based on said affinity information has the effect and advantage that fragmenting resources by a tenant is avoided. This gives more flexibility to perform such fragmentation based on other parameters or resource capabilities, e.g., the type of resources (if some specific hardware acceleration is available on certain hosts), quality of the resources, resiliency levels, etc.
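The allocation step described above amounts to a host-eligibility check. The following sketch is hypothetical: `is_host_eligible`, `select_hosts`, and the placement map are illustrative names, not from the disclosure; only the binary tenant-affinity semantics ("0" = collocation allowed, "1" = not allowed) come from the description.

```python
def is_host_eligible(host_tenants, requesting_tenant, tenant_affinity):
    """Return True if a virtual resource from `requesting_tenant` with the
    given tenant-affinity may be placed on a host currently running
    resources of the tenants in `host_tenants`.

    tenant_affinity: "0" = may be collocated with other tenants,
                     "1" = must not be collocated with other tenants.
    """
    if tenant_affinity == "0":
        return True  # collocation with any tenant is allowed
    # tenant_affinity == "1": host may only hold this tenant's resources
    return all(t == requesting_tenant for t in host_tenants)


def select_hosts(placement, requesting_tenant, tenant_affinity):
    """Filter a data-center placement map (server-id -> tenants hosted)
    down to the servers eligible for the request."""
    return [server_id for server_id, tenants in placement.items()
            if is_host_eligible(tenants, requesting_tenant, tenant_affinity)]
```

With tenant-affinity "1", a tenant T-A request may only land on empty servers or servers already dedicated to T-A; with "0", every server with capacity is a candidate.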
- the method includes an intermediate step to obtain information related to the current allocation of virtual resources to the plurality of physical and software resources.
- the virtualized resource management engine is part of an entity responsible for the virtualized infrastructure management of virtual, physical, and software resources in a data center or cloud infrastructure.
- the implementation of the application software does not have to consider how to establish and enforce affinity constraints when being deployed in a specific cloud environment.
- An entity responsible for the virtualized infrastructure management may, for example, be the VIM in the NFV Architectural Framework.
- a signaling entity of the information used to identify a first tenant and the affinity information is an entity responsible for the management of virtual network functions.
- An entity responsible for the management of virtual network functions may, for example, be the VNFM in the NFV Architectural Framework.
- the signaling is forwarded through an entity responsible for orchestration of resources and virtual network functions.
- An entity that is responsible for the orchestration of resources and virtual network functions may, for example, be the NFVO in the NFV Architectural Framework.
- the method includes an intermediate step to discover the affinity information based on information received to identify the first tenant.
- the method includes an intermediate step to discover the affinity information based on information received to identify the first tenant, wherein the signaling entity of the information used to identify the first tenant is an entity responsible for the management of virtual network functions, and wherein the discovery of the information related to the affinity based on the information used to identify the first tenant is performed by an entity responsible for the orchestration of resources and virtual network functions, and wherein signaling of information used to identify the first tenant and the information related to the affinity is performed by an entity responsible for the orchestration of resources and virtual network functions.
- tenants, which control the entity responsible for the management of virtual network functions, are allowed to decide on a per-VNF deployment/operation basis how such a VNF and the corresponding virtualized resources should be deployed in terms of being collocated or not with virtualized resources of other tenants. Furthermore, vendors and network operators may have different VNF provisioning strategies under different situations, such as traffic load in the data center, priority of their VNFs, and/or additional network service and resource policy constraints. Finally, the signaling protocol may be more efficient, and it allows an entity other than the tenant (vendor) to determine the specific tenant-affinity information.
- the information related to the affinity is part of a policy and the affinity information is signaled as part of the setup process of the policy.
- the process of allocating said at least one virtual resource to a physical and/or software resource from a plurality of physical and/or software resources is part of management operations wherein management operations preferably include the first instantiation of a virtualized deployment, or the full or partial scaling out, migration or healing of virtualized resources of an existing virtualized deployment.
- the at least one virtual resource is a virtual machine (VM) to run on a hypervisor or a virtual application container to run on an operating system, or a virtual disk drive for storage.
- the allocation of virtual resources is provided for a virtualized deployment, wherein the virtualized deployment is a virtual network function (VNF).
- the affinity information can take multiple values to cover different allocation cases, preferably including one or more of anti-affinity to specific tenants, affinity to specific tenants, affinity to virtual resources which are compute, storage or network intensive.
- affinity information can express affinity or anti-affinity to a certain part or a whole set of vendors, or that affinity information can express affinity or anti-affinity to collocate virtualized resources with certain capabilities.
- an apparatus for allocating, by a virtualized resource management engine, at least one virtual resource to a physical and/or software resource from a plurality of physical and/or software resources, comprising: a module for obtaining information used to identify a first tenant and affinity information as a parameter of a resource allocation request, the affinity information specifying whether or not said requested virtual resource may be collocated on the same physical and/or software resource with one or more virtual resources of another tenant different from said first tenant;
- a module designed to allocate the at least one virtual resource based on the affinity information.
- the apparatus further comprises a module for obtaining information related to the current allocation of virtual resources to the plurality of physical and software resources.
- the apparatus further comprises a module for allocating said at least one virtual resource based on said affinity information.
- FIG. 1 shows an architectural overview of the functional building blocks of an architectural framework for network function virtualization, in particular the ETSI NFV E2E Architectural Framework.
- FIG. 2 shows a conceptual schema of a data center and an example of the physical clustering problem.
- FIG. 3 illustrates an example of the tenant-affinity use and its effect when using it together with VM-affinity rules.
- FIG. 4 shows a conceptual scheme of a data center according to the prior art based on DRS clusters and a conceptual scheme of a data center with allocations according to one embodiment based on inter-tenant affinity values.
- FIG. 5 shows an embodiment of the method to allocate virtualized resources based on tenant-affinity information, where the inter tenant-affinity value is “1”.
- FIG. 6 shows an embodiment of the method to allocate virtualized resources based on tenant-affinity information, where the inter tenant-affinity value is “0”.
- FIG. 7 shows the signaling and information flow between functional blocks of the NFV architecture framework when performing the first exemplary embodiment of the method to allocate virtualized resources based on tenant-affinity information.
- FIG. 8 shows the signaling and information flow between functional blocks of the NFV architecture framework when performing the second exemplary embodiment of the method to allocate virtualized resources based on tenant-affinity information.
- FIG. 9 shows the signaling and information flow between functional blocks of the NFV architecture framework when performing the third exemplary embodiment of the method to allocate virtualized resources based on tenant-affinity information.
- FIG. 10 shows the signaling and information flow between functional blocks of the NFV architecture framework when performing the fourth exemplary embodiment of the method to allocate virtualized resources based on tenant-affinity information.
- ETSI NFV has defined an NFV Architectural Framework, which focuses on the new functional blocks and reference points brought by the virtualization of an operator's network. An overview of the NFV Architectural Framework is shown in FIG. 1 .
- the NFV Architectural Framework describes the functional blocks and the reference points in between such functional blocks.
- the split of functionalities and the declared reference points support the management and orchestration of VNFs 101 in a multi-vendor ecosystem.
- the framework provides the required split of functionalities to ensure that the VNF software can be decoupled from the underlying infrastructure.
- VNF vendors and implementers become actual tenants using the infrastructure, which is likely managed by another entity, for instance a mobile network operator.
- This infrastructure is composed of computing, storage and network resources placed in one or several data centers.
- the infrastructure is also meant to be shared: by using virtualization techniques, several VMs can be allocated and run on a single physical server.
- the technology disclosed herein mainly deals with the following functional blocks of the NFV Architectural Framework which are shown in FIG. 1 :
- VNFMs can interact directly with the VIM (e.g. CMS) to request management of virtualized resources as part of the deployment and management of VNFs.
- An example for such an interaction is a capacity extension for a deployed VNF: this extension can consist of the VNFM requesting additional VMs from the CMS that are then added to the VNF.
- the teachings of the present disclosure tackle the following problem in the context of the NFV Architectural Framework: Given a multi-vendor VNF scenario, with VNFs coming from different vendors, each with their particular resource requirements, how can one ensure that physical clustering of resources can be avoided, thus guaranteeing better statistical gains on sharing resources among different vendors?
- the technology disclosed herein is based on declaring explicit affinity rules based on tenant/vendor information. By declaring such information, the virtualized resource manager engine (part of a Cloud Management System, or of a VIM) can then allocate virtualized resources (e.g., VMs) as part of a virtualized deployment (e.g., VNF) without having to pre-plan in advance the partitioning of physical and software resources in the data center.
- the tenant-affinity parameter, which is referred to in the claims as affinity information, indicates whether the virtualized resources requested by the tenant (vendor) can or cannot be collocated on the same physical and/or software resources with other virtualized resources from other tenants (vendors).
- the tenant-affinity is a parameter, which is different from other affinity parameters known in the state of the art, e.g., those described in the background section as offered by VMware's DRS.
- FIG. 3 explains and illustrates the tenant affinity parameter according to an embodiment and how the use of the tenant-affinity information is complementary to VM-affinity information (i.e. VM-to-VM affinity as defined above) as used in the state of the art.
- This VM-affinity information refers only to affinity among selected VMs, not servers.
- While FIG. 3 shows an example with only two selected VMs, the same principle can also be applied to more than two selected VMs that are to be allocated.
- FIG. 4 compares the prior art (based on DRS clusters) and the technology according to the present disclosure.
- the left hand side of the figure shows the above described “physical clustering” approach.
- the right hand side shows an example of using the tenant-affinity parameter.
- an affinity value of “1” means that resources cannot be shared among tenants, thus virtualized resources with this value can only share servers with other virtualized resources from the same tenant.
- FIG. 5 illustrates an example where the inter-tenant-affinity value is “1”. This means that VMs cannot be collocated with VMs from other tenants.
- FIG. 6 illustrates an example where the inter-tenant-affinity value is “0”.
- the request is that VMs can be allocated and collocated with VMs from other tenants.
- the method illustrated in FIGS. 5 and 6 comprises the following four main steps:
- in Step 1, a request to allocate one or more virtualized resources (for simplicity, it is assumed that such resources are virtual machines, VMs) is issued.
- a request includes information (parameters) that identify the tenant issuing such a request and the tenant-affinity value per virtualized resource.
- in Step 2, a virtualized resource management engine 511 collects the input information from the request of Step 1. Furthermore, it may collect additional information (either stored or retrieved from another entity) about the current placement of virtualized resources on the pool of shared physical and software resources in the data center. This additional information contains, for each identified physical host (identified by a "server-id" parameter 521), at least the following information elements in the table shown with the examples in FIGS. 5 and 6:
- the virtualized resource management engine issues an allocation request to the hypervisor or virtual machine manager for the selected servers/hosts to allocate the virtualized resources (e.g., VMs) in the data center (cloud infrastructure) 512 .
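The main steps above can be outlined in code. This is a hedged sketch only: the field names (`server_id`, `tenant_ids`, `capacity_free`) and the hypervisor callback are assumptions standing in for the information elements of FIGS. 5 and 6 and for the allocation request issued to the hypervisor or virtual machine manager.

```python
def allocate(request, placement_table, hypervisor):
    # Step 1: the request carries the tenant-id and a tenant-affinity
    # value per requested virtualized resource (VM).
    tenant = request["tenant_id"]

    allocations = []
    for vm in request["vms"]:
        # Step 2: consult the current placement of virtualized resources
        # on the shared pool, keyed by server-id.
        candidates = [
            row for row in placement_table
            if row["capacity_free"] > 0
            and (vm["tenant_affinity"] == "0"
                 or all(t == tenant for t in row["tenant_ids"]))
        ]
        if not candidates:
            raise RuntimeError("no eligible server for VM %s" % vm["name"])
        # Step 3: select a server honouring the tenant-affinity constraint.
        chosen = candidates[0]
        chosen["tenant_ids"].append(tenant)
        chosen["capacity_free"] -= 1
        # Step 4: issue the allocation request to the hypervisor / VM
        # manager of the selected host.
        hypervisor(chosen["server_id"], vm["name"])
        allocations.append((vm["name"], chosen["server_id"]))
    return allocations
```

A real engine would additionally weigh load, capabilities, and reservations when choosing among the candidates; the sketch simply takes the first eligible server.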
- FIGS. 7 to 10 illustrate exemplary information flows between functional blocks of the NFV architecture framework when performing the exemplary embodiments of the method to allocate virtualized resources based on tenant-affinity information, which are described in the following as embodiment 1 to embodiment 4, respectively.
- Embodiments A to D include making use of the invention during resource management operations such as scaling out a virtualized deployment (e.g., a VNF), during partial or full migration of virtualized resources, or during partial or full healing of a virtualized deployment.
- the tenant-affinity parameter could be extended not only to hold one of the binary values of “0” and “1” that have been used as example up to now, but rather to hold a value from a set with more than two values. All these embodiments are summarized and explained in the following sections.
- the first set of embodiments 1 to 4 covers the usage of the invention during the resource allocation request procedure:
- Embodiment 1 is the main and basic embodiment that has been used as example throughout the above text.
- the resource allocation request includes in addition to existing parameters (like the specific resource requirements, and possibly reservation information) the identification of the tenant (tenant-id) and the tenant-affinity per virtualized resource requested as presented in this disclosure.
- the resource request is made by a VNFM and issued against the VIM as in step S 701 .
- the mapping of the tenant-affinity and handling such a requirement during the selection of resources is realized by the VIM.
- the sequence of steps and the signaling between functional blocks according to embodiment 1 is illustrated in FIG. 7 .
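For illustration, a resource allocation request per embodiment 1 (VNFM to VIM, step S 701) might carry parameters shaped as below. The field names are hypothetical; the disclosure only requires that the tenant-id and a per-resource tenant-affinity accompany the usual resource requirements and optional reservation information.

```python
import json

# Hypothetical request payload for embodiment 1 (VNFM -> VIM).
allocate_request = {
    "tenant_id": "T-A",
    "reservation_id": None,          # optional reservation information
    "virtualized_resources": [
        {"type": "VM", "vcpus": 4, "memory_gb": 8, "tenant_affinity": "1"},
        {"type": "VM", "vcpus": 2, "memory_gb": 4, "tenant_affinity": "1"},
    ],
}

print(json.dumps(allocate_request, indent=2))
```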
- Embodiment 2 is another embodiment which also aims at the signaling of the tenant affinity and the tenant-id as part of the allocation request, however in this case it is made indirectly through the NFVO (as shown in steps S 801 and S 802 ) instead of directly between VNFM and VIM as outlined in embodiment 1.
- the tenant-affinity information is still signaled by the VNFM.
- the NFVO can also map the resource request by the VNFM to a particular reservation.
- the sequence of steps and the signaling between functional blocks according to embodiment 2 is illustrated in FIG. 8 .
- Embodiment 3 differs from embodiment 2 in that not all information needs to be signaled from the VNFM. Part of the information is rather derived by the NFVO which maps the tenant-id from the resource request from the VNFM.
- the NFVO here keeps internal information that allows it to derive the tenant affinity information.
- the NFVO can also map the resource request by the VNFM to a particular reservation. Then the NFVO can proceed with signaling the resource allocation request to the VIM (as in step S 902 ) similarly to embodiment 2.
- the sequence of steps and the signaling between functional blocks according to embodiment 3 is illustrated in FIG. 9 .
- the signaling of the tenant-affinity information is part of a policy creation process.
- the NFVO is the issuer of the “create policy”
- the VIM is the entity keeping such a policy.
- a policy creation request contains information about the tenant identifier (tenant-id), the tenant-affinity parameter and the class or list of classes of VNFs ([vnf-class]) from the tenant that should follow such affinity placement requirement.
- the parameter notation uses square brackets “[” “]” to indicate that one or a list of values may be specified.
- the VIM stores such information which can be used later on to take allocation decisions.
- the VNFM can directly issue a resource allocation request (step S 1003 ) which only needs to specify the resource requirements and the type or class of the VNF (vnf-class) for such a resource allocation. Then, the VIM maps such information with that contained in the policies and determines the resource allocation accordingly.
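The policy-based variant can be sketched as follows (class and method names are illustrative assumptions, not from the disclosure): the NFVO issues a "create policy" binding tenant-id, tenant-affinity, and [vnf-class] at the VIM; later, an allocation request from the VNFM names only the vnf-class, and the VIM resolves the stored affinity.

```python
class PolicyStore:
    """Hypothetical VIM-side store for tenant-affinity policies."""

    def __init__(self):
        self._policies = []

    def create_policy(self, tenant_id, tenant_affinity, vnf_classes):
        # "[vnf-class]": one or a list of VNF classes the policy applies to
        self._policies.append(
            {"tenant_id": tenant_id,
             "tenant_affinity": tenant_affinity,
             "vnf_classes": list(vnf_classes)})

    def affinity_for(self, tenant_id, vnf_class):
        """Resolve the tenant-affinity to apply to an allocation request
        that names only the tenant and the vnf-class."""
        for p in self._policies:
            if p["tenant_id"] == tenant_id and vnf_class in p["vnf_classes"]:
                return p["tenant_affinity"]
        return "0"  # assumed default: collocation allowed
```

The default of "0" when no policy matches is an assumption for the sketch; a real VIM might instead reject the request or apply an operator-configured default.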
- the second set of embodiments relate to different types of resource operations like scaling the capacity of a VNF, or partially or fully migrating virtual machines of a VNF from one physical host to another for which such tenant-affinity can be used, or partially or fully healing a VNF.
- These embodiments are thus orthogonal to the first set of embodiments:
- the first set describes different ways to implement the signaling procedure to support tenant-affinity related information being passed through different functional blocks within the NFV Architecture Framework;
- the second set describes different operations on the virtualized resources that can be supported.
- the features of embodiments from both sets of embodiments may be combined.
- Embodiment A uses the tenant-affinity information as part of an actual virtualized resource allocation request during the new instantiation process of a VNF (virtualized deployment). This is the example that has been used in this description so far.
- Embodiment B assumes that the VNF should be scaled out, e.g. by adding more virtual machines to this VNF.
- This scale-out procedure thus also requires the allocation of new resources and tenant-affinity information is used to ensure proper instantiation of such resources.
- new virtualized resources may be requested as part of such a VNF, or an expansion of the existing ones, for example the allocation of more vCPUs or virtual memory to an existing virtualized resource (VM).
- a scale-in procedure, in which the capacity of a VNF is reduced, might also need tenant-affinity information. Examples are the case where the VIM wants to decide which VM to remove first, or the case where a VM should be migrated in the wake of resource consolidation after scale-in (as described in the following embodiment C).
- Embodiment C assumes a migration scenario, i.e. either the complete VNF or parts of it are to be migrated to different servers within or among datacenters. This is feasible with standard virtual machine migration technologies as commonly used in datacenters.
- the tenant-affinity information is used to determine to which servers the VMs of a VNF can or cannot be migrated.
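A sketch of this target selection for migration (hypothetical; `migration_targets` and the server map are illustrative names), using the binary tenant-affinity values:

```python
def migration_targets(servers, vm_tenant, tenant_affinity):
    """servers: mapping server-id -> set of tenant-ids currently hosted.
    Return the server-ids the VM may be migrated to under its
    tenant-affinity ("0" = may share with other tenants, "1" = may not)."""
    return sorted(
        sid for sid, tenants in servers.items()
        if tenant_affinity == "0" or tenants <= {vm_tenant}
    )
```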
- Embodiment D covers virtualized resource healing (failure recovery) of the VNF, either for the complete VNF or for parts of it.
- An example here is the failure of certain VMs of a VNF that then need to be redeployed on new servers.
- the tenant-affinity information is used to determine suitable candidate servers for such a re-deployment.
- the possible values of the tenant-affinity parameter can vary: either they are binary as described up to now, or they take different values from a pre-defined value set.
- the tenant-affinity parameter is a binary value that determines if virtualized resources can be collocated with virtualized resources from other tenants or not: If the parameter is equal to “0”, the virtualized resources can be collocated on shared physical and software resources in the data center with other virtualized resources from other tenants; whereas if this parameter is equal to “1”, the virtualized resources cannot be collocated with those from other tenants. This is the embodiment that has been described in the above text.
- the tenant-affinity parameter can take values from a value set, wherein the different values denote information to affinity or anti-affinity to a certain part of or a whole set of tenants (vendors). For instance,
- the tenant-affinity parameter can take values from a value set with more than two values, wherein the different values denote information to affinity or anti-affinity to collocated virtualized resources with certain capabilities. For instance,
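A multi-valued tenant-affinity parameter of this kind can be sketched as follows; the value names and the collocation check below are illustrative assumptions for this sketch, not part of the specification:

```python
from enum import Enum

class TenantAffinity(Enum):
    """Illustrative value set for a multi-valued tenant-affinity parameter.

    SHARED and EXCLUSIVE correspond to the binary values "0" and "1"
    described above; the remaining values are hypothetical extensions
    denoting affinity or anti-affinity to a listed set of tenants."""
    SHARED = 0                  # may share hosts with any tenant
    EXCLUSIVE = 1               # must not share hosts with other tenants
    AFFINE_TO_TENANTS = 2       # may only share hosts with a listed tenant set
    ANTI_AFFINE_TO_TENANTS = 3  # must never share hosts with a listed tenant set

def may_collocate(affinity, own_tenant, host_tenants, tenant_list=()):
    """Check whether a virtualized resource of own_tenant may be placed
    on a host already used by the tenants in host_tenants."""
    others = set(host_tenants) - {own_tenant}
    if affinity is TenantAffinity.SHARED:
        return True
    if affinity is TenantAffinity.EXCLUSIVE:
        return not others
    if affinity is TenantAffinity.AFFINE_TO_TENANTS:
        return others <= set(tenant_list)
    return not (others & set(tenant_list))  # ANTI_AFFINE_TO_TENANTS
```

A capability-based variant (affinity to compute-, storage- or network-intensive collocated resources) would additionally need per-host capability data and is omitted here.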
Description
- The present technology relates to a method and apparatus for allocating at least one virtual resource to a physical and/or software resource from a plurality of physical and/or software resources. In particular, the present technology relates to tenant affinity for resource management in telco cloud systems.
- Network Functions Virtualization (NFV) is an approach to deliver communication services. NFV applies virtualization and automation techniques from IT to move the current network functions (e.g. Firewall, DPI, Serving Gateway, . . . ) in an operator's network from dedicated hardware to general-purpose IT infrastructure. These transformed network functions are known as Virtual Network Functions (VNF). A VNF can be composed of one or several virtual machines (VMs) and virtual networks, which together implement the network function. These VMs and virtual networks are commonly referred to as virtualized resources in the current invention.
- One of the problems that are addressed by the present invention is the waste of resources in the data center due to physical clustering of resources. “Physical clustering” in this context means that a predefined set of computing and storage resources is exclusively assigned to application software from a specific vendor, also referred to as “tenant” within this document. Thus, application software from another vendor cannot use these resources, even if they are free. There are two main reasons for such “physical clustering”:
- A. Security: vendors (tenants) may not want their VMs to be collocated on shared physical or hypervisor software resources with other VMs from other vendors (tenants) for security reasons due to, for example, the possibility to exploit hypervisor or VM bugs to eavesdrop traffic from a VM from a competing vendor.
- B. Performance: vendors (tenants) want to guarantee that the performance of their VNFs (and hence the underlying VMs) is predictable; a malfunctioning VM from a second vendor may impact the VMs from the first vendor. It is also easier for vendors to track and analyze failure reasons once their own VMs experience failures.
- “Physical clustering” can lead to two main issues:
- 1. Waste of data center resources: The pre-provisioning of resource clusters can lead to a waste of resources, especially if the pre-provisioning is done in an inappropriate way such that it later does not match with the actual requirements of the tenants. Specifically, the actual use of resources in such a cluster of resources can vary dynamically due to diverse factors such as traffic load, failures, etc.
- 2. Stronger dependency on vendor/tenant requirements: Physical clustering of resources in the data center still couples the procurement of the VNF software to the infrastructure hardware. Such a behavior strongly contradicts one of the initial motivations to introduce virtualization which aims to abstract the physical/hardware resources from the application software running on them.
- Throughout this disclosure, the term “software” refers to one of the following: First, software that may be shared by different tenants and which is part of the system infrastructure, e.g. software in the form of hypervisor software; second, software that is provided by a tenant to perform application or service functionality, specifically network functions. If necessary and if not otherwise apparent from the context, explicit reference is made to “hypervisor software” vs. “application software” or “VNF software” to differentiate the two distinct uses of the term.
-
FIG. 2 illustrates an example of the physical clustering problem outlined above. Four different clusters are created and assigned to their respective vendors (tenants), namely T-A, T-B, T-C and T-D. The Figure also shows how VMs are allocated to certain physical hosts (servers), and how, depending on traffic loads, etc., the physical clustering can lead to wasting resources within each cluster. In essence, the pool of resources in the data center that could have been shared is now fragmented into different clusters, impeding the allocation of resources to vendors/tenants other than the owner/user of the cluster.
- There are several existing technologies in the domain of data center resource clustering that attempt to address the above problems. One approach is VMware's vSphere 5.5, which is described in the “vSphere ESXi Vcenter server resource management guide”, available online at http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-551-resource-management-guide.pdf. vSphere allows several types of affinity to be defined:
-
- CPU-affinity: CPU-affinity is used to restrict the assignment of VMs to a subset of available processors in multi-processor systems, i.e., it specifies VM-to-processor placement constraints.
- VM-affinity: VM-to-VM affinity rules are used to specify affinity (or anti-affinity) between individual VMs, i.e., it specifies whether selected individual VMs should run on the same host or be kept on separate physical hosts (servers).
- VM-Host affinity: VM-Host affinity rules specify whether or not the members of a selected VM Distributed Resource Scheduler (DRS) group can run on the members of a specified host DRS group. A VM-Host affinity rule includes the following components: a) one VM DRS group, b) one host DRS group, and c) a designation of whether the rule is a requirement (must) or a preference (should), and whether it expresses affinity (run on) or anti-affinity (not run on).
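For illustration, the components a) to c) listed above can be captured in a simple record; the field names below are assumptions made for this sketch and do not reflect VMware's actual API:

```python
from dataclasses import dataclass

@dataclass
class VMHostAffinityRule:
    """Sketch of the components of a VM-Host affinity rule as described
    above (field names are illustrative, not VMware's actual API)."""
    vm_drs_group: str    # a) the VM DRS group the rule applies to
    host_drs_group: str  # b) the host DRS group the rule applies to
    mandatory: bool      # c) True = requirement ("must"), False = preference ("should")
    affine: bool         #    True = affinity ("run on"), False = anti-affinity ("not run on")
```

A cluster assigned exclusively to one tenant, as in FIG. 2, corresponds to a mandatory affinity rule binding that tenant's VM group to one host group.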
- The last type of affinity (i.e., VM-Host affinity) is the one used to partition resources in the data center and create resource clusters, such as the ones shown in
FIG. 2 . These clusters of specific physical hosts (a host DRS group) can then be assigned to tenants (a VM DRS group). However, such creation is made offline (pre-planned) and ahead of receiving VM allocation requests, i.e., the allocation has already been designated beforehand. Therefore, it does not solve the issue of resource waste. The data center schema in FIG. 2 and other figures specifies four virtual machines (VMs) per physical server. This configuration has been chosen for illustration and may vary in actual embodiments, i.e. there may be one or more virtual machines per physical server. In addition to the drawback of pre-planning and allocation of resource clusters outlined above, the specification and maintenance of affinity rules can be very complex and labor-intensive, especially in data center configurations with numerous tenants that have diverse workloads with different requirements.
- A similar approach based on declaring affinity groups and allocating data items and computing resources based on such affinity groups is described by Peirault et al. in U.S. Pat. No. 8,577,892 B2. The basic idea of this approach is that computing resources can be associated with affinity groups, and such affinity groups may be associated with a geographic region or a number of data centers. Data items and computing resources are associated with affinity groups, and can then be allocated based on such an associated affinity group. This solution is very similar to what VMware supports via its DRS clusters.
- Another approach is presented by Ferris et al. in U.S. Pat. No. 8,402,139 B2, wherein a matching system, which could be a cloud management system, collects information about available cloud appliances, which could be physical hosts or servers in the data center, and matches these appliances with user-requested services. Such requested services are applications deployed on a number of VMs. The matching system can also track and manage resources, so users can have specific rights and assigned resources are made available to the users. However, U.S. Pat. No. 8,402,139 B2 does not solve the problem of allocating resources based on affinities expressed and required in comparison with other tenant requests.
- Another approach is described by G. Shanmuganathan et al. in “Defragmenting the Cloud Using Demand-based Resource Allocation” at the ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS 2013). The authors propose two algorithms that dynamically allocate the bulk capacity of resources purchased by a customer between its VMs based on their individual demand and other tenant-specified controls such as reservations, limits and priority. The proposed Distributed Binary Search (DBS) algorithm matches the behavior of VMware's centralized DRS resource manager, but works in a distributed environment; the other proposed algorithm, Base+Proportional Excess (BPX), is fully asynchronous. The main difference of this solution is that it allows defragmenting the cloud/data center resources, which is related to the technology disclosed herein, but in this case the allocation is based on VM loads and load priorities of the VMs as expressed in the requests by the customer. Therefore, it does not consider explicit inter-tenant affinity information.
- When resources are allocated based on affinity groups, all aforementioned approaches require the allocation of resource groups beforehand and in a pre-planned manner.
- According to one embodiment, there is provided a method of allocating by a virtualized resource management engine at least one virtual resource to a physical and/or software resource from a plurality of physical and/or software resources, comprising the steps of:
- obtaining information used to identify a first tenant requesting said at least one virtual resource,
- obtaining affinity information as a parameter of a resource allocation request, specifying whether or not said requested virtual resource may be collocated on the same physical and/or software resource with one or more virtual resources of another tenant different from said first tenant;
- allocating the at least one virtual resource based on the affinity information.
- This method has the effect and advantage that resources in the data center may be allocated without having to pre-plan and/or pre-allocate physical and hypervisor software resources to tenants (vendors). Without such pre-allocation, fewer resources are needed in the data center, as the pool of resources is statistically shared among tenants while at the same time affinity information, expressing constraints on the placement of virtual machines of different tenants, is taken into account when determining the allocation. Therefore, savings in infrastructure resources and capital expenditure are possible, as illustrated in
FIG. 4 , which at the same time also reduces the operational cost due to the reduced amount of required resources. - Allocating the at least one virtual resource based on said affinity information has the effect and advantage that fragmenting resources by tenant is avoided. This gives more flexibility to perform such fragmentation based on other parameters or resource capabilities, e.g., the type of resources (if some specific hardware acceleration is available on certain hosts), quality of the resources, resiliency levels, etc.
- In one embodiment, the method includes an intermediate step to obtain information related to the current allocation of virtual resources to the plurality of physical and software resources.
- This has the effect and advantage that resources in the data center may be allocated under consideration of affinity information in a more dynamic way and at run-time.
- In one embodiment, the virtualized resource management engine is part of an entity responsible for the virtualized infrastructure management of virtual, physical, and software resources in a data center or cloud infrastructure.
- This has the effect and advantage that it allows an administrator of the virtualized infrastructure, e.g. the network operator, to decouple the computing, storage, and network resources in the data center from the vendor's implementation of the software to be deployed. In particular, the implementation of the application software does not have to consider how to establish and enforce affinity constraints when being deployed in a specific cloud environment. An entity responsible for the virtualized infrastructure management may, for example, be the VIM in the NFV Architectural Framework.
- In one embodiment, the entity signaling the information used to identify the first tenant and the affinity information is an entity responsible for the management of virtual network functions.
- This has the effect and advantage that it allows the tenants (vendors), which control the entity responsible for the management of virtual network functions, to decide on a per VNF deployment/operation case how such VNF and the corresponding virtualized resources should be deployed in terms of being or not collocated with other tenant virtualized resources. An entity responsible for the management of virtual network functions may, for example, be the VNFM in the NFV Architectural Framework.
- In one embodiment, the signaling is forwarded through an entity responsible for orchestration of resources and virtual network functions.
- This has the effect and advantage that it allows vendors and network operators to have different VNF provisioning strategies under different situations, being in part determined by the entity that is responsible for the orchestration of resources and virtual network functions, like traffic load in the data center, priority of their VNFs, and/or additional network service and resource policy constraints. An entity that is responsible for the orchestration of resources and virtual network functions may, for example, be the NFVO in the NFV Architectural Framework.
- In one embodiment, the method includes an intermediate step to discover the affinity information based on information received to identify the first tenant.
- This has the effect and advantage in that the affinity information does not have to be specified and transmitted explicitly but is determined based on the identity of the first tenant. This makes the signaling protocol more efficient and it allows an entity other than the tenant (vendor) to determine the specific tenant affinity information.
- In one embodiment, the method includes an intermediate step to discover the affinity information based on information received to identify the first tenant, wherein the signaling entity of the information used to identify the first tenant is an entity responsible for the management of virtual network functions, and wherein the discovery of the information related to the affinity based on the information used to identify the first tenant is performed by an entity responsible for the orchestration of resources and virtual network functions, and wherein signaling of information used to identify the first tenant and the information related to the affinity is performed by an entity responsible for the orchestration of resources and virtual network functions.
- This embodiment achieves several effects and advantages: tenants (vendors), which control the entity responsible for the management of virtual network functions, are allowed to decide on a per VNF deployment/operation case how such VNF and the corresponding virtualized resources should be deployed in terms of being or not collocated with virtualized resources of other tenants, furthermore vendors and network operators may have different VNF provisioning strategies under different situations, like traffic load in the data center, priority of their VNFs, and/or additional network service and resource policy constraints; finally the signaling protocol may be more efficient and it allows an entity other than the tenant (vendor) to determine the specific tenant affinity information.
- In one embodiment, the information related to the affinity is part of a policy and the affinity information is signaled as part of the setup process of the policy.
- This has the effect and advantage that the entity responsible for the management of virtual network functions can directly issue a resource allocation request which only needs to specify the resource requirements and the type or class of the VNF for such a resource allocation. Then, the VIM maps such information with that contained in the policies and issues the resource allocation accordingly. Therefore, the signaling can be more efficient.
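A minimal sketch of such a policy mapping, assuming hypothetical tenant and VNF-class identifiers, is shown below; the VIM would resolve the tenant-affinity value from the pre-provisioned policy instead of requiring it in every resource allocation request:

```python
# Hypothetical policy store set up in advance; keys and values are
# assumptions for this sketch, not a normative policy format.
affinity_policies = {
    ("vendor-A", "EPC-gateway"): 1,  # exclusive hosts for this VNF class
    ("vendor-A", "DPI"): 0,          # instances of this class may share hosts
}

def affinity_for(tenant_id, vnf_class, default=0):
    """Resolve the tenant-affinity value for a resource allocation
    request that only names the tenant and the VNF type or class."""
    return affinity_policies.get((tenant_id, vnf_class), default)
```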
- In one embodiment, the process of allocating said at least one virtual resource to a physical and/or software resource from a plurality of physical and/or software resources is part of management operations wherein management operations preferably include the first instantiation of a virtualized deployment, or the full or partial scaling out, migration or healing of virtualized resources of an existing virtualized deployment.
- This has the effect and advantage that the allocation method is effective also in other management operations so that the aforementioned effects and benefits of the allocation method are maintained when the management operations occur.
- In one embodiment, the at least one virtual resource is a virtual machine (VM) to run on a hypervisor or a virtual application container to run on an operating system, or a virtual disk drive for storage.
- This has the effect and advantage that the method is applicable to computer hardware and software systems and infrastructure commonly available in data centers.
- In one embodiment, the allocation of virtual resources is provided for a virtualized deployment, wherein the virtualized deployment is a virtual network function (VNF).
- This has the effect and advantage that the method is applicable to allocating resources for virtual network functions in a telco cloud.
- In one embodiment, the affinity information can take multiple values to cover different allocation cases, preferably including one or more of anti-affinity to specific tenants, affinity to specific tenants, affinity to virtual resources which are compute, storage or network intensive.
- This has the effect and advantage that the affinity information can express affinity or anti-affinity to a certain part or a whole set of vendors, or that affinity information can express affinity or anti-affinity to collocate virtualized resources with certain capabilities.
- According to an embodiment, there is provided an apparatus for allocating by a virtualized resource management engine at least one virtual resource to a physical and/or software resource from a plurality of physical and/or software resources, comprising: a module for obtaining affinity information as a parameter of a resource allocation request, specifying whether or not said requested virtual resource may be collocated on the same physical and/or software resource with one or more virtual resources of another tenant different from a first tenant requesting said at least one virtual resource;
- a module designed to allocate the at least one virtual resource based on the affinity information.
- According to another embodiment, the apparatus further comprises a module for obtaining information related to the current allocation of virtual resources to the plurality of physical and software resources.
- According to another embodiment, the apparatus further comprises a module for allocating said at least one virtual resource based on said affinity information.
- The effects and advantages achieved by the embodiments of the apparatus correspond to the effects and advantages of the embodiments of the method which have been described in detail above.
-
FIG. 1 shows an architectural overview of the functional building blocks of an architectural framework for network function virtualization, in particular the ETSI NFV E2E Architectural Framework. -
FIG. 2 shows a conceptual schema of a data center and an example of the physical clustering problem. -
FIG. 3 illustrates an example of the tenant-affinity use and its effect when using it together with VM-affinity rules. -
FIG. 4 shows a conceptual scheme of a data center according to the prior art based on DRS clusters and a conceptual scheme of a data center with allocations according to one embodiment based on inter-tenant affinity values. -
FIG. 5 shows an embodiment of the method to allocate virtualized resources based on tenant-affinity information, where the inter tenant-affinity value is “1”. -
FIG. 6 shows an embodiment of the method to allocate virtualized resources based on tenant-affinity information, where the inter tenant-affinity value is “0”. -
FIG. 7 shows the signaling and information flow between functional blocks of the NFV architecture framework when performing the first exemplary embodiment of the method to allocate virtualized resources based on tenant-affinity information. -
FIG. 8 shows the signaling and information flow between functional blocks of the NFV architecture framework when performing the second exemplary embodiment of the method to allocate virtualized resources based on tenant-affinity information. -
FIG. 9 shows the signaling and information flow between functional blocks of the NFV architecture framework when performing the third exemplary embodiment of the method to allocate virtualized resources based on tenant-affinity information. -
FIG. 10 shows the signaling and information flow between functional blocks of the NFV architecture framework when performing the fourth exemplary embodiment of the method to allocate virtualized resources based on tenant-affinity information.
- At first, some terms used in the description will be defined in the following list of abbreviations.
-
- CMS Cloud Management System
- DRS Distributed Resource Scheduler
- NFV Network Functions Virtualization
- NFVI NFV Infrastructure
- NFVO NFV Orchestrator
- VIM Virtual Infrastructure Manager
- VNF Virtual Network Function
- VNFM VNF Manager
- The specifications for NFV are being driven by an Industry Specification Group (ISG) in the European Telecommunications Standards Institute (ETSI). ETSI NFV has defined an NFV Architectural Framework, which focuses on the new functional blocks and reference points brought by the virtualization of an operator's network. An overview of the NFV Architectural Framework is shown in
FIG. 1 .
- The NFV Architectural Framework describes the functional blocks and the reference points between such functional blocks. The split of functionalities and the declared reference points support the management and orchestration of VNFs 101 in a multi-vendor ecosystem. Specifically, the framework provides the required split of functionalities to ensure that the VNF software can be decoupled from the underlying infrastructure. In this scenario, VNF vendors and implementers become actual tenants of the infrastructure, which is likely managed by another entity, like for instance a mobile network operator. This infrastructure is composed of computing, storage and network resources placed in one or several data centers. The infrastructure is also meant to be shared: by using virtualization techniques, several VMs can be allocated and run on a single physical server.
- Throughout the description, the term “vendor” and the term “tenant” will be used interchangeably.
- The technology disclosed herein mainly deals with the following functional blocks of the NFV Architectural Framework which are shown in
FIG. 1 : -
- The NFV Orchestrator (NFVO) 102, which is in charge of the orchestration and management of NFV Infrastructure and software resources and realizes network services on NFVI. A network service is realized by a collection of one or multiple VNFs.
- VNF Manager (VNFM) 103, responsible for the VNF lifecycle management (e.g. instantiation, update, query, scaling, termination of VNFs).
- The Virtualized Infrastructure Manager (VIM) 104, which controls and manages the NFVI compute, storage and network resources. In the case of a cloud-based NFVI, the VIM can be implemented as a Cloud Management System (CMS).
- The NFV Infrastructure (NFVI) 105, which comprises the set of compute, storage and network resources over which virtualized resources are allocated.
- While the NFVO provides global resource allocation, the VNFMs can interact directly with the VIM (e.g. CMS) to request management of virtualized resources as part of the deployment and management of VNFs. An example for such an interaction is a capacity extension for a deployed VNF: this extension can consist of the VNFM requesting additional VMs from the CMS that are then added to the VNF.
- The teachings of the present disclosure tackle the following problem in the context of the NFV Architectural Framework: Given a multi-vendor VNF scenario, with VNFs coming from different vendors, each with their particular resource requirements, how can one ensure that physical clustering of resources can be avoided, thus guaranteeing better statistical gains on sharing resources among different vendors?
- In the following, embodiments of the invention will be described.
- The technology disclosed herein is based on declaring explicit affinity rules based on tenant/vendor information. By declaring such information, the virtualized resource management engine (part of a Cloud Management System, or of a VIM) can then allocate virtualized resources (e.g., VMs) as part of a virtualized deployment (e.g., a VNF) without having to pre-plan the partitioning of physical and software resources in the data center.
- The solution focuses on:
-
- The parameters about tenant-affinity on interfaces involving functional blocks of the NFV Architectural Framework.
- The method for the resource allocation based on tenant affinity information.
- The tenant-affinity parameter, which is referred to in the claims as affinity information, gives information whether the virtualized resources requested by the tenant (vendor) can or cannot be collocated on the same physical and/or software resources with other virtualized resources from other tenants (vendors). The tenant-affinity is a parameter, which is different from other affinity parameters known in the state of the art, e.g., those described in the background section as offered by VMware's DRS.
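By way of illustration, a resource allocation request carrying such a tenant-affinity parameter might have the following shape; all field names and values here are assumptions for this sketch, not a normative message format:

```python
# Hypothetical resource allocation request as a tenant (vendor) could
# issue it towards the virtualized resource management engine; every
# field name below is an assumption made for illustration.
allocation_request = {
    "tenant-id": "vendor-A",   # identifies the requesting tenant
    "tenant-affinity": 1,      # 1 = do not collocate with other tenants' VMs
    "resources": [             # requested virtualized resources (VMs)
        {"type": "vm", "vcpus": 4, "memory-mb": 8192},
        {"type": "vm", "vcpus": 2, "memory-mb": 4096},
    ],
}
```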
FIG. 3 explains and illustrates the tenant-affinity parameter according to an embodiment, and how the use of the tenant-affinity information is complementary to VM-affinity information (i.e. VM-to-VM affinity as defined above) as used in the state of the art. This VM-affinity information refers only to affinity among selected VMs, not servers. Here VM-affinity=0 means that the newly selected VMs cannot be allocated on the same server, while VM-affinity=1 would mean that the selected VMs have to be allocated on the same server. In this example, virtual resources (VMs) C3 and C4 of tenant “C” are selected VMs having VM-affinity=0 and are allocated on different servers (Server 1 and Server 2). However, when “tenant-affinity=1” information is additionally considered, this means that each of the new VMs of a tenant has to be allocated to a server where only VMs of this tenant are allocated or where no VMs are allocated yet. Therefore, as shown on the right-hand side of FIG. 3 , the allocation is performed on Servers 2 and 3 (case 302), which are either only used by the same tenant C (Server 2) or were not used at all and thus are fully available (Server 3). - It will be understood by the skilled person that while
FIG. 3 shows an example with only two selected VMs, the same principle can also be applied to more than two selected VMs, which are to be allocated. -
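The combined effect of VM-affinity and tenant-affinity in this example can be sketched as follows. This is a simplified model that ignores server capacity and only covers VM anti-affinity (VM-affinity=0) and the binary tenant-affinity parameter; the server and VM names mirror the figure:

```python
def place_with_constraints(new_vms, tenant, servers, tenant_affinity, vm_affinity):
    """Place new_vms of the given tenant on servers (a dict mapping
    server name -> set of tenants already hosted there). Returns a
    dict mapping VM name -> chosen server."""
    placement = {}
    used = set()
    for vm in new_vms:
        for server, tenants in servers.items():
            # tenant-affinity=1: server must be empty or used only by this tenant
            if tenant_affinity == 1 and tenants and tenants != {tenant}:
                continue
            # VM-affinity=0: the selected VMs must go on different servers
            if vm_affinity == 0 and server in used:
                continue
            placement[vm] = server
            used.add(server)
            servers[server] = tenants | {tenant}
            break
    return placement

# FIG. 3 scenario: Server 1 also hosts tenant A, Server 2 hosts only C,
# Server 3 is empty. With VM-affinity=0 and tenant-affinity=1, C3 and C4
# land on Servers 2 and 3, matching the right-hand side of the figure.
servers = {"Server 1": {"A", "C"}, "Server 2": {"C"}, "Server 3": set()}
result = place_with_constraints(["C3", "C4"], "C", servers,
                                tenant_affinity=1, vm_affinity=0)
# result == {"C3": "Server 2", "C4": "Server 3"}
```

With tenant_affinity=0 the same call picks Servers 1 and 2, reproducing the left-hand side of the figure.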
FIG. 4 compares the prior art (based on DRS clusters) and the technology according to the present disclosure. The left-hand side of the figure shows the above-described “physical clustering” approach. The right-hand side shows an example of using the tenant-affinity parameter. In this case, an affinity value of “1” means that resources cannot be shared among tenants; thus, virtualized resources with this value can only share servers with other virtualized resources from the same tenant. In addition, there also exist virtualized resources that have an affinity value of “0”, which means that they can share servers with virtualized resources from other tenants. - Several embodiments that use this basic idea are possible, and some of them are detailed in the following.
- This technology described herein and specified in the claims has the following advantages:
-
- It allows the resources in the data center to be allocated in a more dynamic way, and at run-time without having to pre-plan and pre-allocate physical and software resources to tenants (vendors). Fewer resources are needed in the data center, as the pool of resources is statistically shared among tenants. Therefore, savings on resources and capital expenditure are possible as shown in
FIG. 4 , which means that at the same time operational costs can be reduced due to the reduced amount of required resources. - It avoids fragmenting resources by tenant. This gives more flexibility to perform such fragmentation based on other parameters or resource capabilities, e.g., the type of resources (if some specific hardware acceleration is available on certain hosts), quality of the resources, resiliency levels, etc.
- It allows the administrator of the virtualized infrastructure (in our case the network operator) to decouple the computing, storage, and network resources in the data center from the vendor's implementation of the VNF software.
- It allows the tenants (vendors) to decide on a per VNF deployment/operation case how such VNF and the corresponding virtualized resources should be deployed in terms of being or not being collocated with other tenants' virtualized resources. This allows vendors to have different VNF provisioning strategies under different situations, like traffic load in the data center, priority of their VNFs, and/or additional network service and resource policy constraints.
- In the following, the method to allocate virtualized resources based on tenant-affinity information is described.
-
FIG. 5 illustrates an example where the inter-tenant-affinity value is “1”. This means that VMs cannot be collocated with VMs from other tenants. -
FIG. 6 illustrates an example where the inter-tenant-affinity value is “0”. In such an example, the request is that VMs can be allocated and collocated with VMs from other tenants. - The method illustrated in
FIGS. 5 and 6 comprises the following four main steps: - 1. Step 1 (S501): a request to allocate one or more than one virtualized resources (for simplicity, it is assumed that such resources are virtual machines, VM) is performed. Such a request includes information (parameters) that identify the tenant issuing such a request and the tenant-affinity value per virtualized resource.
- 2. Step 2 (S502): a virtualized resource management engine 511 collects the input information from the request of step 1. Furthermore, it may collect additional information (either stored or retrieved from another entity) about the current placement of virtualized resources on the pool of shared physical and software resources in the data center. This additional information contains, for each identified physical host (identified by a "server-id" parameter 521), at least the following information elements in the table shown with the examples in FIGS. 5 and 6:
 - a. the used-affinity 522 (how the current host/server is used); this corresponds to the tenant-affinity parameter signaled in step 1, but here identifies whether the current physical host can be shared with other tenants (used-affinity "0") or is tenant exclusive (used-affinity "1"),
 - b. the tenant or tenants using such a resource (used-tenant-id 523), and
 - c. the identifiers of the virtualized resources (e.g., VMs) (vm-id 524): a list of the virtual machines that are instantiated on this physical host.
- 3. Step 3 (S503): the virtualized resource management engine finds a physical host (server) that satisfies the resource and tenant-affinity requirements.
- 4. Step 4 (S504): the virtualized resource management engine issues an allocation request to the hypervisor or virtual machine manager for the selected servers/hosts to allocate the virtualized resources (e.g., VMs) in the data center (cloud infrastructure) 512.
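The selection logic of steps 2 and 3 can be sketched in code. The following Python fragment is purely illustrative: the placement table mirrors the server-id/used-affinity/used-tenant-id/vm-id information elements described above, but the function name and data layout are hypothetical and not part of the disclosed interfaces.

```python
# Hypothetical sketch of host selection (steps S502/S503), assuming the
# binary tenant-affinity parameter of FIGS. 5 and 6.

def find_host(placement_table, tenant_id, tenant_affinity):
    """Return the server-id of a host satisfying the tenant-affinity
    requirement, or None if no suitable host exists."""
    for host in placement_table:
        if not host["used-tenant-id"]:
            # Empty host: usable for any request; its used-affinity would
            # be set from the request's tenant-affinity on allocation.
            return host["server-id"]
        if tenant_affinity == 1:
            # Tenant-exclusive request: only a host already exclusive
            # to this very tenant qualifies.
            if host["used-affinity"] == 1 and host["used-tenant-id"] == [tenant_id]:
                return host["server-id"]
        else:
            # Shareable request: only hosts marked as shareable qualify.
            if host["used-affinity"] == 0:
                return host["server-id"]
    return None

# Illustrative placement table, analogous to the tables in FIGS. 5 and 6.
table = [
    {"server-id": "srv-1", "used-affinity": 1, "used-tenant-id": ["A"], "vm-id": ["vm-1"]},
    {"server-id": "srv-2", "used-affinity": 0, "used-tenant-id": ["B"], "vm-id": ["vm-2"]},
    {"server-id": "srv-3", "used-affinity": 0, "used-tenant-id": [], "vm-id": []},
]
```

For example, a request from tenant "C" with tenant-affinity "0" would land on the shareable host srv-2, while the same tenant with tenant-affinity "1" would be placed on the empty host srv-3.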
- Several embodiments are possible based on who issues and processes the request with the tenant-affinity information, and how such information is processed, for instance,
-
- based on actual resource requests on interfaces, or
- by mapping resource requests to the actual reservation of resources (note: in this case, the reservation is logical, e.g., number of vCPUs, virtual memory, etc.; that is, more in terms of quotas rather than physical host reservation), or
- by setting up and using policies.
-
FIGS. 7 to 10 illustrate exemplary information flows between functional blocks of the NFV architecture framework when performing the exemplary embodiments of the method to allocate virtualized resources based on tenant-affinity information, which are described in the following as embodiment 1 to embodiment 4, respectively. - Other possible embodiments, which are referred to as embodiments A to D, include making use of the invention during resource management operations such as scaling out a virtualized deployment (e.g., a VNF), during partial or full migration of virtualized resources, or during partial or full healing of a virtualized deployment. Also, the tenant-affinity parameter could be extended not only to hold one of the binary values "0" and "1" that have been used as examples up to now, but to hold a value from a set with more than two values. All these embodiments are summarized and explained in the following sections.
- The first set of embodiments 1 to 4 covers the usage of the invention during the resource allocation request procedure:
Embodiment 1 is the main and basic embodiment that has been used as an example throughout the above text. Here, the resource allocation request includes, in addition to existing parameters (like the specific resource requirements and possibly reservation information), the identification of the tenant (tenant-id) and the tenant-affinity per virtualized resource requested, as presented in this disclosure. In this case, the resource request is made by a VNFM and issued against the VIM as in step S701. The mapping of the tenant-affinity and the handling of such a requirement during the selection of resources is realized by the VIM. The sequence of steps and the signaling between functional blocks according to embodiment 1 is illustrated in FIG. 7. -
Embodiment 2 also aims at the signaling of the tenant-affinity and the tenant-id as part of the allocation request; however, in this case it is made indirectly through the NFVO (as shown in steps S801 and S802) instead of directly between the VNFM and the VIM as outlined in embodiment 1. The tenant-affinity information is still signaled by the VNFM. During this process, the NFVO can also map the resource request by the VNFM to a particular reservation. The sequence of steps and the signaling between functional blocks according to embodiment 2 is illustrated in FIG. 8. -
Embodiment 3 differs from embodiment 2 in that not all information needs to be signaled by the VNFM. Part of the information is instead derived by the NFVO, which maps the tenant-id from the resource request of the VNFM. The NFVO here keeps internal information that allows it to derive the tenant-affinity information. The NFVO can also map the resource request by the VNFM to a particular reservation. Then the NFVO can proceed with signaling the resource allocation request to the VIM (as in step S902), similarly to embodiment 2. The sequence of steps and the signaling between functional blocks according to embodiment 3 is illustrated in FIG. 9. - In embodiment 4, the signaling of the tenant-affinity information is part of a policy creation process. In this exemplary case, which is illustrated in
FIG. 10, the NFVO is the issuer of the "create policy" request, and the VIM is the entity keeping such a policy. Such a policy creation request (step S1001) contains information about the tenant identifier (tenant-id), the tenant-affinity parameter, and the class or list of classes of VNFs ([vnf-class]) from the tenant that should follow such an affinity placement requirement. The parameter notation uses square brackets "[" "]" to indicate that one value or a list of values may be specified. The VIM stores such information, which can be used later on to make allocation decisions. Once the policy is created, a third element, e.g., the VNFM, can directly issue a resource allocation request (step S1003) that only needs to specify the resource requirements and the type or class of the VNF (vnf-class) for such a resource allocation. The VIM then maps this information to that contained in the policies and determines the resource allocation accordingly. - The second set of embodiments relates to different types of resource operations for which such tenant-affinity can be used, like scaling the capacity of a VNF, partially or fully migrating virtual machines of a VNF from one physical host to another, or partially or fully healing a VNF. These embodiments are thus orthogonal to the first set of embodiments: the first set describes different ways to implement the signaling procedure to support tenant-affinity related information being passed through different functional blocks within the NFV Architecture Framework; the second set describes different operations on the virtualized resources that can be supported. Hence, features of embodiments from both sets may be combined.
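The policy-based variant could be modeled as follows. This is a minimal sketch only: the class and method names (`Vim`, `create_policy`, `resolve_affinity`) and the dictionary keys are hypothetical illustrations of the create-policy (S1001) and allocation-request (S1003) interactions, not the actual NFV interfaces.

```python
# Hypothetical sketch: the VIM stores tenant-affinity policies created by
# the NFVO (step S1001) and later resolves resource allocation requests
# carrying only a vnf-class (step S1003) against them.

class Vim:
    def __init__(self):
        self.policies = []  # policies installed via "create policy"

    def create_policy(self, tenant_id, tenant_affinity, vnf_classes):
        # [vnf-class]: one value or a list of values may be specified.
        if not isinstance(vnf_classes, list):
            vnf_classes = [vnf_classes]
        self.policies.append({
            "tenant-id": tenant_id,
            "tenant-affinity": tenant_affinity,
            "vnf-class": vnf_classes,
        })

    def resolve_affinity(self, vnf_class):
        """Map an allocation request, which specifies only the resource
        requirements and the vnf-class, to the stored policy."""
        for policy in self.policies:
            if vnf_class in policy["vnf-class"]:
                return policy["tenant-id"], policy["tenant-affinity"]
        return None  # no matching policy: allocate without affinity constraint
```

For instance, after `create_policy("tenant-X", 1, ["epc-core", "ims"])`, a later allocation request for vnf-class "ims" resolves to tenant-X with tenant-affinity "1" without the requester having to signal either value.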
- Embodiment A uses the tenant-affinity information as part of an actual virtualized resource allocation request during the new instantiation process of a VNF (virtualized deployment). This is the example that has been used in this description so far.
- In embodiment B, it is assumed that the VNF should be scaled out, e.g., by adding more virtual machines to this VNF. This scale-out procedure thus also requires the allocation of new resources, and tenant-affinity information is used to ensure proper instantiation of such resources. In such a case, new virtualized resources may be requested as part of such a VNF, or existing ones may be expanded, for example, by allocating more vCPUs or virtual memory to an existing virtualized resource (VM). Note that a scale-in procedure, in which the capacity of a VNF is reduced, might also need tenant-affinity information. Examples are the case in which the VIM wants to decide which VM to remove first, or the case in which a VM should be migrated in the wake of resource consolidation after scale-in (as described in the following embodiment C).
- Embodiment C assumes a migration scenario, i.e. either the complete VNF or parts of it are to be migrated to different servers within or among datacenters. This is feasible with standard virtual machine migration technologies as commonly used in datacenters. Here, the tenant-affinity information is used to determine to which servers the VMs of a VNF can or cannot be migrated.
- Embodiment D covers virtualized resource healing (failure recovery) of the VNF, either for the complete VNF or for parts of it. An example here is the failure of certain VMs of a VNF that then need to be redeployed on new servers. Also in this case, the tenant-affinity information is used to determine suitable candidate servers for such a re-deployment.
- Finally, in a third set of embodiments I to III, the possible values of the tenant-affinity parameter are varied. Either they are binary as described up to now, or they take different values from a pre-defined value set.
- In embodiment I, the tenant-affinity parameter is a binary value that determines if virtualized resources can be collocated with virtualized resources from other tenants or not: If the parameter is equal to “0”, the virtualized resources can be collocated on shared physical and software resources in the data center with other virtualized resources from other tenants; whereas if this parameter is equal to “1”, the virtualized resources cannot be collocated with those from other tenants. This is the embodiment that has been described in the above text.
- In embodiment II, the tenant-affinity parameter can take values from a value set, wherein the different values denote affinity or anti-affinity to a certain part of, or the whole set of, tenants (vendors). For instance,
-
- tenant-affinity=A: affinity to any tenant (vendor) of group A.
- tenant-affinity=B: affinity to any tenant (vendor) different than tenant-id=X.
- tenant-affinity=C: affinity to tenant-id=Z, but not to tenant-id=Y.
- etc.
- In embodiment III, the tenant-affinity parameter can take values from a value set with more than two values, wherein the different values denote affinity or anti-affinity to collocation with virtualized resources having certain capabilities. For instance,
-
- tenant-affinity=M: anti-affinity to compute intensive VMs from other tenants.
- tenant-affinity=N: anti-affinity to intensive input/output data VMs from other tenants.
- etc.
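A multi-valued tenant-affinity parameter as in embodiments II and III can be thought of as a named predicate over the tenants currently using a candidate host. The sketch below is illustrative only; the value names follow the A/B/C examples of embodiment II, while the group membership and tenant identifiers are hypothetical.

```python
# Hypothetical sketch: each tenant-affinity value from the value set maps
# to a predicate deciding whether collocation with a host's current
# tenants is acceptable (embodiment II examples A, B, C).

GROUP_A = {"tenant-1", "tenant-2"}  # illustrative membership of group A

AFFINITY_RULES = {
    # tenant-affinity=A: affinity to any tenant (vendor) of group A
    "A": lambda tenants: tenants <= GROUP_A,
    # tenant-affinity=B: affinity to any tenant different than tenant-id=X
    "B": lambda tenants: "X" not in tenants,
    # tenant-affinity=C: affinity to tenant-id=Z, but not to tenant-id=Y
    "C": lambda tenants: "Y" not in tenants,
}

def collocation_allowed(tenant_affinity, host_tenants):
    """True if a virtualized resource with the given tenant-affinity value
    may be placed on a host currently used by the given tenants."""
    rule = AFFINITY_RULES[tenant_affinity]
    return rule(set(host_tenants))
```

The same table-of-predicates structure extends naturally to embodiment III, where the predicate would inspect capability annotations of the collocated VMs (e.g., compute-intensive or I/O-intensive) instead of tenant identifiers.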
- It will be readily apparent to the skilled person that the methods, the elements, units and apparatuses described in connection with embodiments of the invention may be implemented in hardware, in software, or as a combination of both. In particular it will be appreciated that the embodiments of the invention and the elements of modules described in connection therewith may be implemented by a computer program or computer programs running on a computer or being executed by a microprocessor. Any apparatus implementing the invention may in particular take the form of a computing device acting as a network entity.
Claims (14)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14200452.2A EP3040860A1 (en) | 2014-12-29 | 2014-12-29 | Resource management in cloud systems |
EP14200452.2 | 2014-12-29 | ||
PCT/EP2015/081333 WO2016107862A1 (en) | 2014-12-29 | 2015-12-29 | Resource management in cloud systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170371717A1 true US20170371717A1 (en) | 2017-12-28 |
Family
ID=52272943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/540,436 Abandoned US20170371717A1 (en) | 2014-12-29 | 2015-12-29 | Resource management in cloud systems |
Country Status (5)
Country | Link |
---|---|
US (1) | US20170371717A1 (en) |
EP (1) | EP3040860A1 (en) |
JP (1) | JP6435050B2 (en) |
CN (1) | CN107113192A (en) |
WO (1) | WO2016107862A1 (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10642896B2 (en) | 2016-02-05 | 2020-05-05 | Sas Institute Inc. | Handling of data sets during execution of task routines of multiple languages |
US10650045B2 (en) | 2016-02-05 | 2020-05-12 | Sas Institute Inc. | Staged training of neural networks for improved time series prediction performance |
US10650046B2 (en) | 2016-02-05 | 2020-05-12 | Sas Institute Inc. | Many task computing with distributed file system |
US10795935B2 (en) | 2016-02-05 | 2020-10-06 | Sas Institute Inc. | Automated generation of job flow definitions |
US11042417B2 (en) | 2016-08-10 | 2021-06-22 | Nec Corporation | Method for managing computational resources of a data center using a single performance metric for management decisions |
CN106648462B (en) * | 2016-11-21 | 2019-10-25 | 华为技术有限公司 | Date storage method and device |
CN108123924B (en) * | 2016-11-30 | 2021-02-12 | 中兴通讯股份有限公司 | Resource management method and system |
CN108234536A (en) * | 2016-12-14 | 2018-06-29 | 中国电信股份有限公司 | Virtual resource allocation method and cloud pipe platform |
USD898059S1 (en) | 2017-02-06 | 2020-10-06 | Sas Institute Inc. | Display screen or portion thereof with graphical user interface |
USD898060S1 (en) | 2017-06-05 | 2020-10-06 | Sas Institute Inc. | Display screen or portion thereof with graphical user interface |
US10768963B2 (en) | 2017-07-31 | 2020-09-08 | Hewlett Packard Enterprise Development Lp | Virtual network functions allocation in a datacenter based on extinction factor |
CN107766001B (en) * | 2017-10-18 | 2021-05-25 | 成都索贝数码科技股份有限公司 | Storage quota method based on user group |
CN110098946B (en) * | 2018-01-31 | 2021-09-03 | 华为技术有限公司 | Method and device for deploying virtualized network element equipment |
CN109460298A (en) * | 2018-11-01 | 2019-03-12 | 云宏信息科技股份有限公司 | A kind of data processing method and device |
CN109783196B (en) * | 2019-01-17 | 2021-03-12 | 新华三信息安全技术有限公司 | Virtual machine migration method and device |
CN113918268A (en) * | 2020-07-07 | 2022-01-11 | 华为技术有限公司 | Multi-tenant management method and device |
CN115967712A (en) * | 2021-05-12 | 2023-04-14 | 华为云计算技术有限公司 | Cloud service deployment method of cloud platform and related equipment thereof |
KR102613365B1 (en) * | 2021-12-09 | 2023-12-13 | 국민대학교산학협력단 | Apparatus and method for determining ai-based cloud service server |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010205209A (en) * | 2009-03-06 | 2010-09-16 | Hitachi Ltd | Management computer, computer system, and physical resource allocating method |
US9712402B2 (en) * | 2012-10-10 | 2017-07-18 | Alcatel Lucent | Method and apparatus for automated deployment of geographically distributed applications within a cloud |
US9621425B2 (en) * | 2013-03-27 | 2017-04-11 | Telefonaktiebolaget L M Ericsson | Method and system to allocate bandwidth for heterogeneous bandwidth request in cloud computing networks |
CN203219314U (en) * | 2013-04-15 | 2013-09-25 | 四川省电力公司信息通信公司 | Cloud data center resource pool management control system |
CN103412792B (en) * | 2013-07-18 | 2015-06-10 | 成都国科海博信息技术股份有限公司 | Dynamic task scheduling method and device under cloud computing platform environment |
CN103593229B (en) * | 2013-11-26 | 2016-06-15 | 西安工程大学 | Integrated and United Dispatching framework and the dispatching method of isomery cloud operating system |
CN104142864A (en) * | 2014-08-07 | 2014-11-12 | 浪潮电子信息产业股份有限公司 | Multi-tenant performance isolation framework based on virtualization technology |
-
2014
- 2014-12-29 EP EP14200452.2A patent/EP3040860A1/en not_active Withdrawn
-
2015
- 2015-12-29 CN CN201580071573.6A patent/CN107113192A/en active Pending
- 2015-12-29 US US15/540,436 patent/US20170371717A1/en not_active Abandoned
- 2015-12-29 JP JP2017529726A patent/JP6435050B2/en active Active
- 2015-12-29 WO PCT/EP2015/081333 patent/WO2016107862A1/en active Application Filing
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10938900B1 (en) * | 2015-12-18 | 2021-03-02 | EMC IP Holding Company LLC | Software defined storage defragmentation |
US20220210019A1 (en) * | 2016-04-08 | 2022-06-30 | Huawei Technologies Co., Ltd. | Management Method and Apparatus |
US11296945B2 (en) * | 2016-04-08 | 2022-04-05 | Huawei Technologies Co., Ltd. | Management method and apparatus |
US20190036783A1 (en) * | 2016-04-08 | 2019-01-31 | Huawei Technologies Co., Ltd. | Management Method and Apparatus |
US10243878B2 (en) * | 2016-06-16 | 2019-03-26 | Cisco Technology, Inc. | Fog computing network resource partitioning |
US20190141122A1 (en) * | 2017-05-22 | 2019-05-09 | At&T Intellectual Property I, L.P. | Systems and methods to improve the performance of a network by more efficient virtual network resource allocation |
US10749944B2 (en) * | 2017-05-22 | 2020-08-18 | Shopify Inc. | Systems and methods to improve the performance of a network by more efficient virtual network resource allocation |
US10200463B2 (en) * | 2017-05-22 | 2019-02-05 | At&T Intellectual Property I, L.P. | Systems and methods to improve the performance of a network by more efficient virtual network resource allocation |
US10728132B2 (en) * | 2017-06-01 | 2020-07-28 | Hewlett Packard Enterprise Development Lp | Network affinity index increase |
US20210326168A1 (en) * | 2018-02-26 | 2021-10-21 | Amazon Technologies, Inc. | Autonomous cell-based control plane for scalable virtualized computing |
US11297622B1 (en) | 2018-06-25 | 2022-04-05 | At&T Intellectual Property I, L.P. | Dynamic hierarchical reserved resource allocation |
US20220244982A1 (en) * | 2018-09-18 | 2022-08-04 | Vmware, Inc. | Network-efficient isolation environment redistribution |
US11847485B2 (en) * | 2018-09-18 | 2023-12-19 | Vmware, Inc. | Network-efficient isolation environment redistribution |
US11327780B2 (en) * | 2018-09-18 | 2022-05-10 | Vmware, Inc. | Network-efficient isolation environment redistribution |
US10958730B2 (en) * | 2018-09-28 | 2021-03-23 | Hewlett Packard Enterprise Development Lp | Mapping virtual network functions |
US20200106834A1 (en) * | 2018-09-28 | 2020-04-02 | Hewlett Packard Enterprise Development Lp | Mapping virtual network functions |
WO2020125698A1 (en) * | 2018-12-21 | 2020-06-25 | 华为技术有限公司 | Resource object management method and apparatus |
US11057306B2 (en) * | 2019-03-14 | 2021-07-06 | Intel Corporation | Traffic overload protection of virtual network functions |
US11431572B2 (en) | 2019-03-14 | 2022-08-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Semantic detection and resolution of conflicts and redundancies in network function virtualization policies |
US11507408B1 (en) * | 2020-01-21 | 2022-11-22 | Amazon Technologies, Inc. | Locked virtual machines for high availability workloads |
US11797324B2 (en) | 2020-03-23 | 2023-10-24 | Fujitsu Limited | Status display method and storage medium |
US20230231817A1 (en) * | 2020-05-29 | 2023-07-20 | Equinix, Inc. | Tenant-driven dynamic resource allocation for virtual network functions |
US11611517B2 (en) * | 2020-05-29 | 2023-03-21 | Equinix, Inc. | Tenant-driven dynamic resource allocation for virtual network functions |
US20210377185A1 (en) * | 2020-05-29 | 2021-12-02 | Equinix, Inc. | Tenant-driven dynamic resource allocation for virtual network functions |
US11729091B2 (en) | 2020-12-10 | 2023-08-15 | Amazon Technologies, Inc. | Highly available data-processing network functions for radio-based networks |
US11601348B2 (en) | 2020-12-10 | 2023-03-07 | Amazon Technologies, Inc. | Managing radio-based private networks |
US11627472B2 (en) | 2020-12-10 | 2023-04-11 | Amazon Technologies, Inc. | Automated deployment of radio-based networks |
US20220188211A1 (en) * | 2020-12-10 | 2022-06-16 | Amazon Technologies, Inc. | Managing computing capacity in radio-based networks |
US11886315B2 (en) * | 2020-12-10 | 2024-01-30 | Amazon Technologies, Inc. | Managing computing capacity in radio-based networks |
US11711727B1 (en) | 2021-03-16 | 2023-07-25 | Amazon Technologies, Inc. | Provisioning radio-based networks on demand |
US11895508B1 (en) | 2021-03-18 | 2024-02-06 | Amazon Technologies, Inc. | Demand-based allocation of ephemeral radio-based network resources |
US11838273B2 (en) | 2021-03-29 | 2023-12-05 | Amazon Technologies, Inc. | Extending cloud-based virtual private networks to radio-based networks |
US11743953B2 (en) | 2021-05-26 | 2023-08-29 | Amazon Technologies, Inc. | Distributed user plane functions for radio-based networks |
US20230164188A1 (en) * | 2021-11-22 | 2023-05-25 | Nutanix, Inc. | System and method for scheduling virtual machines based on security policy |
WO2023191830A1 (en) * | 2022-04-01 | 2023-10-05 | Altiostar Networks, Inc. | Scaling subscriber handling capacity and throughput in a cloud native radio access network |
EP4455879A1 (en) * | 2023-04-07 | 2024-10-30 | VMware LLC | Methods and apparatus to manage cloud computing resources |
Also Published As
Publication number | Publication date |
---|---|
JP2018503897A (en) | 2018-02-08 |
WO2016107862A1 (en) | 2016-07-07 |
JP6435050B2 (en) | 2018-12-05 |
CN107113192A (en) | 2017-08-29 |
EP3040860A1 (en) | 2016-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170371717A1 (en) | Resource management in cloud systems | |
US10635496B2 (en) | Thread pool management | |
US10988793B2 (en) | Cloud management with power management support | |
US20210083949A1 (en) | System and method to support network slicing in an mec system providing automatic conflict resolution arising from multiple tenancy in the mec environment | |
US8862720B2 (en) | Flexible cloud management including external clouds | |
EP3285439B1 (en) | Network service lifecycle management method and device | |
US10102018B2 (en) | Introspective application reporting to facilitate virtual machine movement between cloud hosts | |
US8863138B2 (en) | Application service performance in cloud computing | |
EP2585910B1 (en) | Methods and systems for planning application deployment | |
CN107222531B (en) | Container cloud resource scheduling method | |
US10628228B1 (en) | Tiered usage limits across compute resource partitions | |
US20210110506A1 (en) | Dynamic kernel slicing for vgpu sharing in serverless computing systems | |
CN110741352B (en) | Virtual network function management system, virtual network function management method and computer readable storage device | |
RU2690198C1 (en) | Method of processing network functions virtualization resources and virtualised network function manager | |
WO2011162746A1 (en) | A method and system for determining a deployment of applications | |
EP3442201B1 (en) | Cloud platform construction method and cloud platform | |
US11726816B2 (en) | Scheduling workloads on a common set of resources by multiple schedulers operating independently | |
US20190004844A1 (en) | Cloud platform construction method and cloud platform | |
KR20140044597A (en) | Apparatus and method for processing task | |
US20150286508A1 (en) | Transparently routing job submissions between disparate environments | |
US10025626B2 (en) | Routing job submissions between disparate compute environments | |
CN110347473B (en) | Method and device for distributing virtual machines of virtualized network elements distributed across data centers | |
US11689411B1 (en) | Hardware resource management for management appliances running on a shared cluster of hosts | |
US20240370310A1 (en) | Resource sharing in an orchestrated environment | |
US11403130B2 (en) | Method and apparatus for workload volatility management in cloud computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NTT DOCOMO, INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIESS, WOLFGANG;MARQUES, JOAN TRIAY;LUCA, XUELI AN-DE;AND OTHERS;SIGNING DATES FROM 20170817 TO 20170828;REEL/FRAME:043741/0030 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |