US20230136615A1 - Virtual pools and resources using distributed networked processing units - Google Patents
- Publication number
- US20230136615A1 (application US 18/090,701; US202218090701A)
- Authority
- US
- United States
- Prior art keywords
- computing system
- resource
- request
- network
- ipu
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04L41/0889—Techniques to speed-up the configuration process
- G06F11/0709—Error or fault processing not based on redundancy, the processing taking place in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
- G06F11/0793—Remedial or corrective actions
- G06F12/0851—Cache with interleaved addressing
- G06F12/0873—Mapping of cache memory to specific storage devices or parts thereof
- G06F9/3005—Arrangements for executing specific machine instructions to perform operations for flow control
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/5038—Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
- G06F9/5044—Allocation of resources considering hardware capabilities
- G06F9/505—Allocation of resources considering the load
- G06F9/5066—Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
- G06F9/5072—Grid computing
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5094—Allocation of resources where the allocation takes into account power or heat criteria
- G06F9/54—Interprogram communication
- H04L41/5019—Ensuring fulfilment of SLA
- H04L63/0876—Network security authentication of entities based on the identity of the terminal or configuration, e.g. MAC address, hardware or software configuration or device fingerprint
- H04L63/12—Applying verification of the received information
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1091—Interfacing with client-server systems or between P2P systems
- H04L67/1097—Protocols for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/63—Routing a service request depending on the request content or context
- G06F3/0611—Improving I/O performance in relation to response time
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- H04L41/5003—Managing SLA; Interaction between SLA and QoS
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- Embodiments described herein generally relate to data processing, network communication, and communication system implementations of distributed computing, including the implementations with the use of networked processing units such as infrastructure processing units (IPUs) or data processing units (DPUs).
- Computing deployments are moving toward highly distributed, multi-edge, and multi-tenant arrangements. Deployments may have different limitations in terms of power and space, and may use different types of compute, acceleration, and storage technologies to overcome these power and space limitations. Deployments are also typically interconnected in tiered and/or peer-to-peer fashion, in an attempt to create a network of connected devices and edge appliances that work together.
- Edge computing, at a general level, has been described as systems that provide the transition of compute and storage resources closer to endpoint devices at the edge of a network (e.g., consumer computing devices, user equipment, etc.). As compute and storage resources are moved closer to endpoint devices, a variety of advantages have been promised, such as reduced application latency, improved service capabilities, improved compliance with security or data privacy requirements, improved backhaul bandwidth, improved energy consumption, and reduced cost. However, many deployments of edge computing technologies—especially complex deployments for use by multiple tenants—have not been fully adopted.
- FIG. 1 illustrates an overview of a distributed edge computing environment, according to an example
- FIG. 2 depicts computing hardware provided among respective deployment tiers in a distributed edge computing environment, according to an example
- FIG. 3 depicts additional characteristics of respective deployment tiers in a distributed edge computing environment, according to an example
- FIG. 4 depicts a computing system architecture including a compute platform and a network processing platform provided by an infrastructure processing unit, according to an example
- FIG. 5 depicts an infrastructure processing unit arrangement operating as a distributed network processing platform within network and data center edge settings, according to an example
- FIG. 6 depicts functional components of an infrastructure processing unit and related services, according to an example
- FIG. 7 depicts a block diagram of example components in an edge computing system which implements a distributed network processing platform, according to an example
- FIG. 8 depicts an arrangement of distributed processing provided at an edge computing network layer, according to an example
- FIG. 9 depicts a distributed computing environment enabled for operation of virtual pools, according to an example.
- FIG. 10 depicts a flowchart of a method for operating a virtual pool of resources at a host computing system, for a virtual pool that is accessible across distributed computing entities, according to an example.
- a virtual pool may be created between an IPU and a network infrastructure device to access physical functions and resources at a compute node that hosts the IPU.
- Such physical functions and resources may be accessed by the IPU using an interconnect such as Compute Express Link (CXL), to access and pool any number of disaggregated resources at the compute node.
- CXL provides an interconnect with access to many types of local devices (e.g., GPUs, memory, storage, accelerators, etc.), including resource instances that can be accessed by different tenants and servers.
- Another aspect discussed herein relates to the creation of a virtual pool to access virtual resources from the IPU.
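- For illustration only, the following is a minimal sketch (not taken from the disclosure) of how a virtual pool registry might map pooled resources to either local CXL-attached devices or resources owned by a peer IPU; all class names, fields, and addresses are hypothetical assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class PoolResource:
    """One entry in a virtual pool: a physical or virtual resource."""
    kind: str                 # e.g., "gpu", "memory", "storage", "accelerator"
    location: str             # "local-cxl" or "remote-ipu"
    peer_ipu: Optional[str]   # address of the owning IPU when remote
    capacity: int             # abstract capacity units

class VirtualPool:
    """Hypothetical virtual pool exposed by an IPU to tenants and servers."""
    def __init__(self, pool_id: str):
        self.pool_id = pool_id
        self.resources: Dict[str, PoolResource] = {}

    def register(self, name: str, res: PoolResource) -> None:
        self.resources[name] = res

    def resolve(self, name: str) -> PoolResource:
        """Report where a pooled resource actually lives so the pool owner
        can route the access (a local CXL transaction or a remote request)."""
        return self.resources[name]

# Example: one local CXL-attached GPU and one storage volume owned by a peer IPU.
pool = VirtualPool("vpool-0")
pool.register("gpu0", PoolResource("gpu", "local-cxl", None, 4))
pool.register("vol7", PoolResource("storage", "remote-ipu", "10.0.0.7", 512))
print(pool.resolve("vol7").location)   # -> "remote-ipu"
```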
- FIG. 1 is a block diagram 100 showing an overview of a distributed edge computing environment, which may be adapted for implementing the present techniques for distributed networked processing units.
- the edge cloud 110 is established from processing operations among one or more edge locations, such as a satellite vehicle 141 , a base station 142 , a network access point 143 , an on premise server 144 , a network gateway 145 , or similar networked devices and equipment instances. These processing operations may be coordinated by one or more edge computing platforms 120 or systems that operate networked processing units (e.g., IPUs, DPUs) as discussed herein.
- the edge cloud 110 is generally defined as involving compute that is located closer to endpoints 160 (e.g., consumer and producer data sources) than the cloud 130 , such as autonomous vehicles 161 , user equipment 162 , business and industrial equipment 163 , video capture devices 164 , drones 165 , smart cities and building devices 166 , sensors and IoT devices 167 , etc.
- Compute, memory, network, and storage resources that are offered at the entities in the edge cloud 110 can provide ultra-low or improved latency response times for services and functions used by the endpoint data sources, as well as reduce network backhaul traffic from the edge cloud 110 toward the cloud 130, thus improving energy consumption and overall network usage, among other benefits.
- Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer end point devices than at a base station or a central office data center).
- edge computing attempts to minimize the number of resources needed for network services, through the distribution of more resources that are located closer both geographically and in terms of in-network access time.
- FIG. 2 depicts examples of computing hardware provided among respective deployment tiers in a distributed edge computing environment.
- one tier at an on-premise edge system is an intelligent sensor or gateway tier 210 , which operates network devices with low power and entry-level processors and low-power accelerators.
- Another tier at an on-premise edge system is an intelligent edge tier 220, which operates edge nodes with higher power limitations and may include high-performance storage.
- a network edge tier 230 operates servers including form factors optimized for extreme conditions (e.g., outdoors).
- a data center edge tier 240 operates additional types of edge nodes such as servers, and includes increasingly powerful or capable hardware and storage technologies.
- a core data center tier 250 and a public cloud tier 260 operate compute equipment with the highest power consumption and largest configuration of processors, acceleration, storage/memory devices, and highest throughput network.
- Among these tiers, various forms of Intel® processor lines are depicted for purposes of illustration; it will be understood that other brands and manufacturers of hardware will be used in real-world deployments. Additionally, it will be understood that additional features or functions may exist among multiple tiers.
- One such example is connectivity and infrastructure management that enables a distributed IPU architecture, which can potentially extend across all of tiers 210, 220, 230, 240, 250, 260.
- Other relevant functions that may extend across multiple tiers may relate to security features, domain or group functions, and the like.
- FIG. 3 depicts additional characteristics of respective deployment tiers in a distributed edge computing environment, based on the tiers discussed with reference to FIG. 2 .
- This figure depicts additional network latencies at each of the tiers 210, 220, 230, 240, 250, 260, and the gradual increase in network latency as the compute is located farther from the edge endpoints. Additionally, this figure depicts additional power and form factor constraints, use cases, and key performance indicators (KPIs).
- edge computing within the edge cloud 110 may provide the ability to serve and respond to multiple applications of the use cases in real-time or near real-time and meet ultra-low latency requirements.
- networking has become one of the fundamental pieces of the architecture that allow achieving scale with resiliency, security, and reliability.
- Networking technologies have evolved to provide more capabilities beyond pure network routing capabilities, including to coordinate quality of service, security, multi-tenancy, and the like. This has also been accelerated by the development of new smart network adapter cards and other types of network derivatives that incorporate capabilities such as ASICs (application-specific integrated circuits) or FPGAs (field-programmable gate arrays) to accelerate some of those functionalities (e.g., remote attestation).
- networked processing units have begun to be deployed at network cards (e.g., smart NICs), gateways, and the like, which allow direct processing of network workloads and operations.
- a networked processing unit is an infrastructure processing unit (IPU), which is a programmable network device that can be extended to provide compute capabilities with far richer functionalities beyond pure networking functions.
- the following discussion refers to functionality applicable to an IPU configuration, such as that provided by an Intel® line of IPU processors. However, it will be understood that functionality will be equally applicable to DPUs and other types of networked processing units provided by ARM®, Nvidia®, and other hardware OEMs.
- FIG. 4 depicts an example compute system architecture that includes a compute platform 420 and a network processing platform comprising an IPU 410 .
- the main compute platform 420 is composed of typical elements that are included with a computing node, such as one or more CPUs 424 that may or may not be connected via a coherent domain (e.g., via Ultra Path Interconnect (UPI) or another processor interconnect); one or more memory units 425; one or more additional discrete devices 426 such as storage devices, discrete acceleration cards (e.g., a field-programmable gate array (FPGA), a visual processing unit (VPU), etc.); a baseboard management controller 421; and the like.
- the compute platform 420 may operate one or more containers 422 (e.g., with one or more microservices), within a container runtime 423 (e.g., Docker containerd).
- the IPU 410 operates as a networking interface and is connected to the compute platform 420 using an interconnect (e.g., using either PCIe or CXL).
- the IPU 410 in this context, can be observed as another small compute device that has its own: (1) Processing cores (e.g., provided by low-power cores 417 ), (2) operating system (OS) and cloud native platform 414 to operate one or more containers 415 and a container runtime 416 ; (3) Acceleration functions provided by an ASIC 411 or FPGA 412 ; (4) Memory 418 ; (5) Network functions provided by network circuitry 413 ; etc.
- the IPU 410 is seen as a discrete device from the local host (e.g., the OS running in the compute platform CPUs 424 ) that is available to provide certain functionalities (networking, acceleration etc.). Those functionalities are typically provided via Physical or Virtual PCIe functions. Additionally, the IPU 410 is seen as a host (with its own IP etc.) that can be accessed by the infrastructure to setup an OS, run services, and the like. The IPU 410 sees all the traffic going to the compute platform 420 and can perform actions—such as intercepting the data or performing some transformation—as long as the correct security credentials are hosted to decrypt the traffic.
- In terms of the Open Systems Interconnection (OSI) model, processing may be performed at the transport layer only.
- the IPU may be able to intercept traffic at the traffic layer (e.g., intercept CDN traffic and process it locally).
- Use cases for IPUs and similar networked processing units include: accelerating network processing; managing hosts (e.g., in a data center); or implementing quality of service policies.
- Most functionalities today are focused on using the IPU at the local appliance level and within a single system. These approaches do not address how IPUs could work together in a distributed fashion, or how system functionalities can be divided among the IPUs on other parts of the system. Accordingly, the following introduces enhanced approaches for enabling and controlling distributed functionality among multiple networked processing units. This enables the extension of current IPU functionalities to work as a distributed set of IPUs that can work together to achieve stronger features such as resiliency, reliability, and the like.
- FIG. 5 depicts an IPU arrangement operating as a distributed network processing platform within network and data center edge settings.
- workloads or processing requests are directly provided to an IPU platform, such as directly to IPU 514 .
- workloads or processing requests are provided to some intermediate processing device 512 , such as a gateway or NUC (next unit of computing) device form factor, and the intermediate processing device 512 forwards the workloads or processing requests to the IPU 514 .
- the IPU 514 directly receives data from use cases 502 A.
- the IPU 514 operates one or more containers with microservices to perform processing of the data.
- In a manner similar to a small gateway (e.g., a NUC type of appliance), the IPU 514 may process data as a small aggregator of sensors that runs on the far edge, or may perform some level of inline processing or preprocessing and send a payload to be further processed by the IPU or the system to which the IPU connects.
- the intermediate processing device 512 provided by the gateway or NUC receives data from use cases 502 B.
- the intermediate processing device 512 includes various processing elements (e.g., CPU cores, GPUs), and may operate one or more microservices for servicing workloads from the use cases 502 B.
- the intermediate processing device 512 invokes the IPU 514 to complete processing of the data.
- the IPU 514 may connect with a local compute platform, such as that provided by a CPU 516 (e.g., Intel® Xeon CPU) operating multiple microservices.
- the IPU may also connect with a remote compute platform, such as that provided at a data center by CPU 540 at a remote server.
- For example, consider a microservice that performs some analytical processing (e.g., face detection on image data), where the CPU 516 and the CPU 540 provide access to this same microservice.
- The IPU 514, depending on the current load of the CPU 516 and the CPU 540, may decide to forward the images or payload to one of the two CPUs. Data forwarding or processing can also depend on other factors, such as an SLA for latency or performance metrics (e.g., perf/watt) in the two systems.
- the distributed IPU architecture may accomplish features of load balancing.
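- As a minimal sketch of such a forwarding decision (assuming hypothetical thresholds and metric names that are not part of the disclosure), an IPU could score candidate targets by load, SLA latency, and perf/watt:

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    load: float            # utilization, 0.0 - 1.0
    latency_ms: float      # observed request latency to this target
    perf_per_watt: float   # higher is better

def pick_target(targets, sla_latency_ms=50.0):
    """Prefer targets that meet the latency SLA; among those, pick the least
    loaded one, breaking ties in favor of better perf/watt."""
    eligible = [t for t in targets if t.latency_ms <= sla_latency_ms] or targets
    return min(eligible, key=lambda t: (t.load, -t.perf_per_watt))

local = Target("CPU-516 (local)", load=0.85, latency_ms=5.0, perf_per_watt=2.0)
remote = Target("CPU-540 (data center)", load=0.30, latency_ms=22.0, perf_per_watt=3.5)
print(pick_target([local, remote]).name)   # forwards to the less loaded remote CPU
```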
- the IPU in the computing environment 510 may be coordinated with other network-connected IPUs.
- a Service and Infrastructure orchestration manager 530 may use multiple IPUs as a mechanism to implement advanced service processing schemes for the user stacks. This may also enable implementing system functionalities such as failover, load balancing, etc.
- IPUs can be arranged in the following non-limiting configurations.
- a particular IPU (e.g., IPU 514) can be configured to forward traffic to service replicas that run on other systems (e.g., systems reachable via IPU 520) when a local host does not respond.
- a particular IPU (e.g., IPU 514) can work with other IPUs (e.g., IPU 520) to perform load balancing across other systems. For example, consider a scenario where CDN traffic targeted to the local host is forwarded to another host when I/O or compute in the local host is scarce at a given moment.
- a particular IPU (e.g., IPU 514) can work as a power management entity to implement advanced system policies. For example, consider a scenario where the whole system (e.g., including CPU 516) is placed in a C6 state (a low-power/power-down state available to a processor) while forwarding traffic to other systems (e.g., via IPU 520) and consolidating it.
- FIG. 6 depicts functional components of an IPU 610 , including services and features to implement the distributed functionality discussed herein. It will be understood that some or all of the functional components provided in FIG. 6 may be distributed among multiple IPUs, hardware components, or platforms, depending on the particular configuration and use case involved.
- a number of functional components are operated to manage requests for a service running in the IPU (or running in the local host).
- IPUs can either run services or intercept requests arriving at services running in the local host and perform some action. In the latter case, the IPU can perform the following types of actions/functions (provided as non-limiting examples).
- each IPU is provided with Peer Discovery logic to discover other IPUs in the distributed system that can work together with it.
- Peer Discovery logic may use mechanisms such as broadcasting to discover other IPUs that are available on a network.
- the Peer Discovery logic is also responsible for working with the Peer Attestation and Authentication logic to validate and authenticate peer IPU identities, determine whether they are trustworthy, and determine whether the current system tenant allows the current IPU to work with them.
- an IPU may perform operations such as: retrieve a proof of identity and proof of attestation; connect to a trusted service running in a trusted server; or, validate that the discovered system is trustworthy.
- Various technologies including hardware components or standardized software implementations that enable attestation, authentication, and security may be used with such operations.
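- A minimal, hypothetical sketch of a discovery-then-attestation flow is shown below; real attestation would rely on hardware-rooted evidence checked by a trusted attestation service, and the names and the toy verifier here are illustrative assumptions only.

```python
import hashlib
import uuid

class PeerIPU:
    """Candidate peer discovered on the network (hypothetical record)."""
    def __init__(self, address: str):
        self.address = address
        self.identity = str(uuid.uuid4())

    def proof_of_identity(self) -> str:
        # Stand-in for a signed attestation quote from the peer.
        return hashlib.sha256(self.identity.encode()).hexdigest()

def discover_peers(broadcast_responses):
    """Turn broadcast responses (peer addresses) into candidate peer objects."""
    return [PeerIPU(addr) for addr in broadcast_responses]

def attest_and_admit(peer: PeerIPU, verify_with_trusted_service) -> bool:
    """Admit a peer only if the trusted verification function accepts its proof."""
    return verify_with_trusted_service(peer.address, peer.proof_of_identity())

# Toy verifier: accepts any well-formed 64-hex-character proof (illustration only).
trusted = lambda addr, proof: len(proof) == 64
peers = [p for p in discover_peers(["10.0.0.7", "10.0.0.8"])
         if attest_and_admit(p, trusted)]
print([p.address for p in peers])
```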
- each IPU provides interfaces to other IPUs to enable attestation of the IPU itself.
- IPU Attestation logic is used to perform an attestation flow within a local IPU in order to create the proof of identity that will be shared with other IPUs. Attestation here may integrate previous approaches and technologies to attest a compute platform. This may also involve the use of trusted attestation service 640 to perform the attestation operations.
- a particular IPU includes capabilities to discover the functionalities that peer IPUs provide. Once the authentication is done, the IPU can determine what functionalities the peer IPUs provide (using the IPU Peer Discovery logic) and store a record of such functionality locally. Examples of properties to discover can include: (i) the type of IPU, the functionalities provided, and associated KPIs (e.g., support for enclaves, such as enclaves provided by Intel® SGX or TDX technologies); (ii) current services that are running on the IPU and on the system that can potentially accept requests forwarded from this IPU; or (iii) other interfaces or hooks that are provided by an IPU, such as access to remote storage, access to a remote VPU, or access to certain functions.
- A service may be described by properties such as: a UUID; estimated performance KPIs in the host or IPU; the average performance provided by the system during N units of time (or any other type of indicator); and similar properties.
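- For example, a discovered service could be recorded locally with a structure along the following lines (field names and values are hypothetical, not taken from the disclosure):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceDescriptor:
    """Record an IPU might keep for a service discovered on a peer system."""
    uuid: str
    runs_on: str                 # "host" or "ipu"
    estimated_kpi_rps: float     # estimated requests per second
    recent_perf: List[float] = field(default_factory=list)  # last N observations

    def average_performance(self) -> float:
        return sum(self.recent_perf) / len(self.recent_perf) if self.recent_perf else 0.0

svc = ServiceDescriptor(uuid="00000000-0000-0000-0000-000000000001",
                        runs_on="ipu", estimated_kpi_rps=1200.0,
                        recent_perf=[1100.0, 1180.0, 1220.0])
print(svc.average_performance())   # 1166.66...
```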
- the IPU includes functionality to manage services that are running either on the host compute platform or in the IPU itself.
- Managing (orchestrating) services includes performing service and resource orchestration for the services that can run on the IPU or that the IPU can affect.
- Two types of usage models are envisioned:
- the IPU may enable external orchestrators to deploy services on the IPU compute capabilities.
- an IPU includes a component similar to Kubernetes (K8s)-compatible APIs to manage the containers (services) that run on the IPU itself.
- the IPU may run a service that is just providing content to storage connected to the platform.
- the orchestration entity running in the IPU may manage the services running in the IPU as happens in other systems (e.g., keeping the service level objectives).
- external orchestrators can be allowed to register with the IPU those services running on the host that may require the IPU to broker requests, implement failover mechanisms, and provide other functionalities. For example, an external orchestrator may register that a particular service running on the local compute platform is replicated in another edge node managed by another IPU, where requests can be forwarded.
- external orchestrators may provide to the Service/Application Intercept logic the inputs that are needed to intercept traffic for these services (as such traffic typically is encrypted). This may include properties such as the source and destination of the traffic to be intercepted, or the key to use to decrypt the traffic. Likewise, this information may be needed to terminate TLS so that the IPU can understand the requests that arrive and that the other logic blocks may need to parse in order to take actions. For example, if there is a CDN read request, the IPU may need to decrypt the packet to understand that the network packet includes a read request, and it may redirect the request to another host based on the content that is being intercepted. Examples of Service/Application Intercept information are depicted in table 620 in FIG. 6.
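- To make the intercept inputs concrete, the following hypothetical sketch registers an intercept rule (source, destination, key reference) and decides what to do with a decrypted CDN read request; all names, addresses, and the request format are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class InterceptRule:
    """Inputs an orchestrator might register so the IPU can intercept a service's traffic."""
    service: str
    src: str            # source address/prefix of the traffic to intercept
    dst: str            # destination address/port of the local service
    tls_key_ref: str    # reference to the key needed to terminate TLS / decrypt

def handle_request(rule: InterceptRule, decrypted_request: str, local_busy: bool) -> str:
    """After TLS termination, inspect the request and decide where it should go."""
    if decrypted_request.startswith("CDN_READ") and local_busy:
        return "forward-to-peer"          # a replica behind another IPU serves the read
    return "deliver-to-local-service"

rule = InterceptRule("cdn-cache", "0.0.0.0/0", "192.168.1.10:443", "keystore://cdn-cache")
print(handle_request(rule, "CDN_READ /asset/123", local_busy=True))   # forward-to-peer
```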
- External orchestration can be implemented in multiple topologies.
- One supported topology includes having the orchestrator managing all the IPUs running on the backend public or private cloud.
- Another supported topology includes having the orchestrator managing all the IPUs running in a centralized edge appliance.
- Still another supported topology includes having the orchestrator running in another IPU that is working as the controller or having the orchestrator running distributed in multiple other IPUs that are working as controllers (master/primary node), or in a hierarchical arrangement.
- the IPU may include Service Request Brokering logic and Load Balancing logic to perform brokering actions on arrival of requests for target services running in the local system. For instance, the IPU may determine whether those requests can be executed by other peer systems (e.g., accessible through Service and Infrastructure Orchestration 630). This can occur, for example, because the load in the local system is high.
- the local IPU may negotiate with other peer IPUs for the possibility to forward the request. Negotiation may involve metrics such as cost. Based on such negotiation metrics, the IPU may decide to forward the request.
- the Service Request Brokering and Load Balancing logic may distribute requests arriving to the local IPU to other peer IPUs.
- the other IPUs and the local IPU work together and do not necessarily need brokering.
- Such logic acts similarly to a cloud-native sidecar proxy. For instance, requests arriving at the system may be sent to the service X running in the local system (either the IPU or the compute platform) or forwarded to a peer IPU that has another instance of service X running.
- the load balancing distribution can be based on existing algorithms, such as selecting the systems that have lower load, using round robin, etc.
- the IPU includes Reliability and Failover logic to monitor the status of the services running on the compute platform or the status of the compute platform itself.
- the Reliability and Failover logic may require the Load Balancing logic to transiently or permanently forward requests that target specific services in situations such as where: i) the compute platform is not responding; ii) the service running inside the compute node is not responding; or iii) the compute platform load prevents the targeted service from providing the required level of service level objectives (SLOs). Note that this logic must know the required SLOs for the services.
- Such functionality may be coordinated with service information 650 including SLO information.
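- A minimal sketch of that failover check, under the assumption of a single latency-based SLO and hypothetical parameter names, might look like:

```python
def should_failover(platform_responding: bool,
                    service_responding: bool,
                    current_latency_ms: float,
                    slo_latency_ms: float) -> bool:
    """Trigger transient or permanent forwarding when any failover condition holds."""
    if not platform_responding or not service_responding:
        return True
    return current_latency_ms > slo_latency_ms   # load is breaking the latency SLO

# Example: the platform and service are up, but the latency SLO is being missed.
print(should_failover(True, True, current_latency_ms=180.0, slo_latency_ms=100.0))  # True
```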
- the IPU may include workload pipeline execution logic that understands how workloads are composed and manages their execution.
- Workloads can be defined as a graph that connects different microservices.
- the load balancing and brokering logic may be able to understand those graphs and decide what parts of the pipeline are executed where. Further, to perform these and other operations, the Intercept logic will also decode what is included as part of the requests.
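- As an illustrative sketch (the graph format, stage names, and placement policy are assumptions, not the disclosed implementation), a workload graph could be planned stage by stage onto the least loaded candidate location:

```python
# Hypothetical workload graph: each stage names its successor and candidate placements.
pipeline = {
    "decode": {"next": "detect", "candidates": ["ipu-local", "host-local"]},
    "detect": {"next": "notify", "candidates": ["host-local", "peer-ipu-7"]},
    "notify": {"next": None,     "candidates": ["peer-ipu-7"]},
}
load = {"ipu-local": 0.2, "host-local": 0.9, "peer-ipu-7": 0.4}

def plan(pipeline, load, start="decode"):
    """Assign each stage of the pipeline to its least loaded candidate location."""
    placement, stage = {}, start
    while stage is not None:
        placement[stage] = min(pipeline[stage]["candidates"], key=lambda n: load[n])
        stage = pipeline[stage]["next"]
    return placement

print(plan(pipeline, load))
# e.g., {'decode': 'ipu-local', 'detect': 'peer-ipu-7', 'notify': 'peer-ipu-7'}
```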
- a distributed network processing configuration may enable IPUs to perform an important role in managing resources of edge appliances.
- the functional components of an IPU can operate to perform these and similar types of resource management functionalities.
- an IPU can provide management or access to external resources that are hosted in other locations and expose them as local resources using constructs such as Compute Express Link (CXL).
- the IPU could potentially provide access to a remote accelerator that is hosted in a remote system via CXL.mem/cache and IO.
- Another example includes providing access to a remote storage device hosted in another system.
- the local IPU could work with another IPU in the storage system and expose the remote system as PCIE VF/PF (virtual functions/physical functions) to the local host.
- an IPU can provide access to IPU-specific resources.
- Those IPU resources may be physical (such as storage or memory) or virtual (such as a service that provides access to random number generation).
- an IPU can manage local resources that are hosted in the system where it belongs.
- the IPU can manage power of the local compute platform.
- an IPU can provide access to other type of elements that relate to resources (such as telemetry or other types of data).
- telemetry provides useful data that is needed to decide where to execute workloads or to identify problems.
- the IPU can also include functionality to manage I/O from the system perspective.
- the IPU includes Host Virtualization and XPU Pooling logic responsible for managing access to resources that are outside the system domain (or within the IPU) and that can be offered to the local compute system.
- XPU refers to any type of a processing unit, whether CPU, GPU, VPU, an acceleration processing unit, etc.
- the IPU logic, after discovery and attestation, can agree with other systems to share external resources with the services running in the local system.
- IPUs may advertise available resources to other peers, or such resources can be discovered during the discovery phase as introduced earlier. IPUs may request access to those resources from other IPUs. For example, the IPU on system A may request access to storage on system B managed by another IPU. Remote and local IPUs can work together to establish a connection between the target resources and the local system.
- resources can be exposed to the services running in the local compute node using the VF/PF PCIE and CXL Logic. Each of those resources can be offered as VF/PF.
- the IPU logic can expose to the local host resources that are hosted in the IPU. Examples of resources to expose may include local accelerators, access to services, and the like.
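- The following hypothetical sketch shows the bookkeeping side of that idea: a local IPU agrees to use a resource owned by a peer and records how it is exposed to the local host (the negotiation, attestation, and actual VF/PF plumbing are assumed to happen elsewhere; all names are illustrative).

```python
from dataclasses import dataclass

@dataclass
class SharedResource:
    kind: str          # "storage", "accelerator", "memory", ...
    owner_ipu: str     # peer IPU that hosts the physical resource
    exposed_as: str    # how the local host sees it, e.g., a PCIe virtual function

class XPUPoolingLogic:
    """Hypothetical record-keeping for remote resources exposed to the local host."""
    def __init__(self):
        self.exposed = []

    def request_remote(self, peer_ipu: str, kind: str) -> SharedResource:
        # Negotiation/attestation with the peer is assumed to succeed here.
        vf_name = f"pcie-vf-{len(self.exposed)}"
        res = SharedResource(kind, peer_ipu, vf_name)
        self.exposed.append(res)
        return res

pooling = XPUPoolingLogic()
disk = pooling.request_remote("ipu-system-b", "storage")
print(f"{disk.kind} on {disk.owner_ipu} exposed to the local host as {disk.exposed_as}")
```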
- Power management is one of the key features to achieve favorable system operational expenditures (OPEX). The IPU is very well positioned to optimize the power consumption of the local system.
- the Distributed and Local Power Management unit is responsible for metering the power that the system is consuming and the load that the system is receiving, and for tracking the service level agreements that the various services running in the system are achieving for the arriving requests. Likewise, when power efficiencies (e.g., power usage effectiveness (PUE)) are not meeting certain thresholds or the local compute demand is low, the IPU may decide to forward requests intended for local services to other IPUs that host replicas of those services.
- Such power management features may also coordinate with the Brokering and Load Balancing logic discussed above.
- IPUs can work together to decide where requests can be consolidated to establish higher power efficiency as a system.
- when traffic is redirected, the local power consumption can be reduced in different ways.
- Example operations that can be performed include: changing the system to C6 State; changing the base frequencies; performing other adaptations of the system or system components.
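- A minimal sketch of such a consolidation decision (thresholds and action names are hypothetical) might be:

```python
def consolidation_action(local_load: float, pue: float,
                         pue_threshold: float = 1.5,
                         idle_load: float = 0.1) -> str:
    """Pick a power action: forward work away and sleep, slow down, or do nothing."""
    if local_load < idle_load:
        return "forward-to-replicas-and-enter-C6"
    if pue > pue_threshold:
        return "lower-base-frequency"
    return "no-change"

print(consolidation_action(local_load=0.05, pue=1.8))   # forward work and power down
```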
- Telemetry Metrics: The IPU can generate multiple types of metrics that can be of interest to services, orchestration, or tenants owning the system.
- telemetry can be accessed, including: (i) Out of band via side interfaces; (ii) In band by services running in the IPU; or (iii) Out of band using PCIE or CXL from the host perspective.
- Relevant types of telemetries can include: Platform telemetry; Service Telemetry; IPU telemetry; Traffic telemetry; and the like.
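- For illustration, a telemetry sample could be tagged with one of the listed categories and the path through which it was collected (field names and values below are assumptions, not disclosed formats):

```python
from dataclasses import dataclass, asdict

@dataclass
class TelemetrySample:
    category: str       # "platform" | "service" | "ipu" | "traffic"
    metric: str
    value: float
    access_path: str    # "oob-side-interface" | "inband-ipu-service" | "oob-pcie-cxl"

samples = [
    TelemetrySample("platform", "power_watts", 143.0, "oob-side-interface"),
    TelemetrySample("traffic", "rx_gbps", 18.7, "inband-ipu-service"),
]
print([asdict(s) for s in samples])
```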
- Configurations of connected IPUs may include: Remote IPUs accessed via an IP network, such as within a certain latency for data plane offloads/storage offloads (or connected for management/control plane operations); or
- Distributed IPUs providing an interconnected network of IPUs, including as many as hundreds of nodes within a domain.
- Configurations of distributed IPUs working together may also include fragmented distributed IPUs, where each IPU or pooled system provides part of the functionalities, and each IPU becomes a malleable system.
- Configurations of distributed IPUs may also include virtualized IPUs, such as provided by a gateway, switch, or an inline component (e.g., inline between the service acting as IPU), and in some examples, in scenarios where the system has no IPU.
- Other connectivity and deployment arrangements for distributed IPUs may include: IPU-to-IPU in the same tier or a close tier; IPU-to-IPU in the cloud (data to compute versus compute to data); integration in small device form factors (e.g., gateway IPUs); a gateway/NUC plus IPU which connects to a data center; multiple GW/NUCs (e.g., 16); a gateway/NUC plus IPU on the server; or a GW/NUC and IPU that are connected to a server with an IPU.
- the preceding distributed IPU functionality may be implemented among a variety of types of computing architectures, including one or more gateway nodes, one or more aggregation nodes, or edge or core data centers distributed across layers of the network (e.g., in the arrangements depicted in FIGS. 2 and 3 ). Accordingly, such IPU arrangements may be implemented in an edge computing system by or on behalf of a telecommunication service provider (“telco”, or “TSP”), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities.
- Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives.
- Such edge computing systems may be embodied as a type of device, appliance, computer, or other “thing” capable of communicating with other edge, networking, or endpoint components.
- FIG. 7 depicts a block diagram of example components in a computing device 750 which can operate as a distributed network processing platform.
- the computing device 750 may include any combinations of the components referenced above, implemented as integrated circuits (ICs), as a package or system-on-chip (SoC), or as portions thereof, discrete electronic devices, or other modules, logic, instruction sets, programmable logic or algorithms, hardware, hardware accelerators, software, firmware, or a combination thereof adapted in the computing device 750 , or as components otherwise incorporated within a larger system.
- the computing device 750 may include processing circuitry comprising one or both of a network processing unit 752 (e.g., an IPU or DPU, as discussed above) and a compute processing unit 754 (e.g., a CPU).
- the network processing unit 752 may provide a networked specialized processing unit such as an IPU, DPU, network processor, or other “xPU” outside of the central processing unit (CPU).
- the processing unit may be embodied as a standalone circuit or circuit package, integrated within an SoC, integrated with networking circuitry (e.g., in a SmartNIC), or integrated with acceleration circuitry, storage devices, or AI or specialized hardware, consistent with the examples above.
- the compute processing unit 754 may provide a processor as a central processing unit (CPU) microprocessor, multi-core processor, multithreaded processor, an ultra-low voltage processor, an embedded processor, or other forms of a special purpose processing unit or specialized processing unit for compute operations.
- Either the network processing unit 752 or the compute processing unit 754 may be a part of a system on a chip (SoC) which includes components formed into a single integrated circuit or a single package.
- the network processing unit 752 or the compute processing unit 754 and accompanying circuitry may be provided in a single socket form factor, multiple socket form factor, or a variety of other formats.
- the processing units 752 , 754 may communicate with a system memory 756 (e.g., random access memory (RAM)) over an interconnect 755 (e.g., a bus).
- the system memory 756 may be embodied as volatile (e.g., dynamic random access memory (DRAM), etc.) memory. Any number of memory devices may be used to provide for a given amount of system memory.
- a storage 758 may also couple to the processor 752 via the interconnect 755 to provide for persistent storage of information such as data, applications, operating systems, and so forth.
- the storage 758 may be implemented as non-volatile storage such as a solid-state disk drive (SSD).
- the components may communicate over the interconnect 755 .
- the interconnect 755 may include any number of technologies, including industry-standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), Compute Express Link (CXL), or any number of other technologies.
- the interconnect 755 may couple the processing units 752 , 754 to a transceiver 766 , for communications with connected edge devices 762 .
- the transceiver 766 may use any number of frequencies and protocols.
- a wireless local area network (WLAN) unit may implement Wi-Fi® communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, or a wireless wide area network (WWAN) unit may implement wireless wide area communications according to a cellular, mobile network, or other wireless wide area protocol.
- the wireless network transceiver 766 (or multiple transceivers) may communicate using multiple standards or radios for communications at a different range.
- the communication circuitry may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., a cellular networking protocol such as a 3GPP 4G or 5G standard, a wireless local area network protocol such as IEEE 802.11/Wi-Fi®, a wireless wide area network protocol, Ethernet, Bluetooth®, Bluetooth Low Energy, an IoT protocol such as IEEE 802.15.4 or ZigBee®, Matter®, low-power wide-area network (LPWAN) or low-power wide-area (LPWA) protocols, etc.) to effect such communication.
- applicable communications circuitry used by the device may include or be embodied by any one or more of components 766 , 768 , or 770 . Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry.
- the computing device 750 may include or be coupled to acceleration circuitry 764 , which may be embodied by one or more AI accelerators, a neural compute stick, neuromorphic hardware, an FPGA, an arrangement of GPUs, one or more SoCs, one or more CPUs, one or more digital signal processors, dedicated ASICs, or other forms of specialized processors or circuitry designed to accomplish one or more specialized tasks.
- These tasks may include AI processing (including machine learning, training, inferencing, and classification operations), visual data processing, network data processing, object detection, rule analysis, or the like. Accordingly, in various examples, applicable means for acceleration may be embodied by such acceleration circuitry.
- the interconnect 755 may couple the processing units 752 , 754 to a sensor hub or external interface 770 that is used to connect additional devices or subsystems.
- the devices may include sensors 772, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global navigation system (e.g., GPS) sensors, pressure sensors, and the like.
- the hub or interface 770 further may be used to connect the edge computing node 750 to actuators 774 , such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.
- various input/output (I/O) devices may be present within, or connected to, the edge computing node 750 .
- a display or other output device 784 may be included to show information, such as sensor readings or actuator position.
- An input device 786 such as a touch screen or keypad may be included to accept input.
- An output device 784 may include any number of forms of audio or visual display, including simple visual outputs such as LEDs or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the edge computing node 750 .
- a battery 776 may power the edge computing node 750 , although, in examples in which the edge computing node 750 is mounted in a fixed location, it may have a power supply coupled to an electrical grid, or the battery may be used as a backup or for temporary capabilities.
- a battery monitor/charger 778 may be included in the edge computing node 750 to track the state of charge (SoCh) of the battery 776 .
- the battery monitor/charger 778 may be used to monitor other parameters of the battery 776 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 776 .
- a power block 780 or other power supply coupled to a grid, may be coupled with the battery monitor/charger 778 to charge the battery 776 .
- the instructions 782 on the processing units 752 , 754 may configure execution or operation of a trusted execution environment (TEE) 790 .
- the TEE 790 operates as a protected area accessible to the processing units 752 , 754 for secure execution of instructions and secure access to data.
- Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in the edge computing node 750 through the TEE 790 and the processing units 752 , 754 .
- the computing device 750 may be a server, an appliance computing device, and/or any other type of computing device with the various form factors discussed above.
- the computing device 750 may be provided by an appliance computing device that is a self-contained electronic device including a housing, a chassis, a case, or a shell.
- the instructions 782 provided via the memory 756 , the storage 758 , or the processing units 752 , 754 may be embodied as a non-transitory, machine-readable medium 760 including code to direct the processor 752 to perform electronic operations in the edge computing node 750 .
- the processing units 752 , 754 may access the non-transitory, machine-readable medium 760 over the interconnect 755 .
- the non-transitory, machine-readable medium 760 may be embodied by devices described for the storage 758 or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices.
- the non-transitory, machine-readable medium 760 may include instructions to direct the processing units 752 , 754 to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality discussed herein.
- the terms “machine-readable medium”, “machine-readable storage”, “computer-readable storage”, and “computer-readable medium” are interchangeable.
- a machine-readable medium also includes any tangible medium that is capable of storing, encoding, or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions.
- a “machine-readable medium” thus may include but is not limited to, solid-state memories, and optical and magnetic media.
- the instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., HTTP).
- a machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format.
- information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as instructions themselves or a format from which the instructions may be derived.
- This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like.
- the information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein.
- deriving the instructions from the information may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions.
- the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine-readable medium.
- the information when provided in multiple parts, may be combined, unpacked, and modified to create the instructions.
- the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers.
- a software distribution platform (e.g., one or more servers and one or more storage devices) may be used to distribute software, such as the example instructions discussed above, to one or more devices, such as example processor platform(s) and/or example connected edge devices noted above.
- the example software distribution platform may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices.
- the providing entity is a developer, a seller, and/or a licensor of software
- the receiving entity may be consumers, users, retailers, OEMs, etc., that purchase and/or license the software for use and/or re-sale and/or sub-licensing.
- the instructions are stored on storage devices of the software distribution platform in a particular format.
- a format of computer readable instructions includes, but is not limited to a particular code language (e.g., Java, JavaScript, Python, C, C#, SQL, HTML, etc.), and/or a particular code state (e.g., uncompiled code (e.g., ASCII), interpreted code, linked code, executable code (e.g., a binary), etc.).
- the computer readable instructions stored in the software distribution platform are in a first format when transmitted to an example processor platform(s).
- the first format is an executable binary that particular types of the processor platform(s) can execute.
- the first format is uncompiled code that requires one or more preparation tasks to transform the first format to a second format to enable execution on the example processor platform(s).
- the receiving processor platform(s) may need to compile the computer readable instructions in the first format to generate executable code in a second format that is capable of being executed on the processor platform(s).
- the first format is interpreted code that, upon reaching the processor platform(s), is interpreted by an interpreter to facilitate execution of instructions.
- Compute requirements for portable devices vary widely based on the use case of the respective device.
- some of the primary constraints that determine a given configuration for a computing system are power consumption (and battery life), form factor, and monetary or resource cost.
- Fan-less designs may be possible and smaller chassis may be sufficient to enable usage of lighter, more portable form factors.
- these lightweight form factors may not be usable at the edge without access to more-powerful distributed computing resources.
- a deployment may include hundreds of edge devices with different resource utilizations and different acceleration capabilities. Improvements in device capabilities have generally increased connectivity speeds between these devices due to improvements to a variety of wireless and fiber/wired connections in networks and between devices. A consequence of increased connectivity speeds is that it is now possible to expect inter-device connectivity speeds that surpass a gigabyte per second. In addition, capabilities have become available to hot-plug components within systems with technologies such as CXL or PCIe. Whereas prior approaches required all system components to be recognized and pre-determined at boot time, newer systems provide increasing capability to add and expand system resources during run-time.
- hot-plugging capabilities are further enhanced by technologies such as memory pooling in the microprocessor architectures, where remote memory can be mapped, on-demand, to the local address map via CXL.
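- The following is a minimal illustrative sketch (not part of the original disclosure) of the on-demand mapping idea described above, modeled in Python. The RemoteRegion and LocalAddressMap names are hypothetical and do not correspond to a real CXL API; the sketch only shows remote memory being assigned a local address range at run-time and released again.

from dataclasses import dataclass

@dataclass
class RemoteRegion:
    node_id: str      # node that physically hosts the loaned memory
    size_bytes: int   # size of the region mapped into the local address map

class LocalAddressMap:
    def __init__(self, base: int = 0x1000_0000):
        self._next = base
        self._mappings = {}  # local base address -> RemoteRegion

    def map_on_demand(self, region: RemoteRegion) -> int:
        # Assign a local base address to a remote region when it is first needed.
        local_base = self._next
        self._next += region.size_bytes
        self._mappings[local_base] = region
        return local_base

    def unmap(self, local_base: int) -> None:
        # Release a mapping, e.g., when the remote memory leaves the pool.
        self._mappings.pop(local_base, None)

# Example: memory hosted at a remote node becomes locally addressable at run-time.
amap = LocalAddressMap()
base = amap.map_on_demand(RemoteRegion(node_id="node-B", size_bytes=1 << 30))
print(hex(base))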
- IPU devices and infrastructure devices perform coordinated operations to expose resources and devices within a virtual pool.
- This virtual pool provides a fabric between different devices and edge platforms with defined levels of quality of service, and with mechanisms to allow the infrastructure to verify that resources and devices/platforms that share devices are trustworthy.
- verification and quality of service features may be particularly complex and unable to operate in existing computing arrangements. For example, consider a small device attempting to access a GPU hosted in a base station, where the small device does not have enough capability to perform attestation because of compute limitations, or lacks the security constructs to connect to the right system.
- FIG. 8 depicts an example arrangement of distributed processing provided at an edge computing network layer, using a distributed infrastructure processing unit mesh network. Specifically, FIG. 8 depicts computing operations coordinated among a user layer 810 , an edge layer 820 , and a cloud layer 830 . Consistent with the examples discussed above (e.g., with reference to FIGS. 1 to 3 ), edge computing operations may be performed at the edge layer 820 based on requests from client devices or consumers at the user layer 810 , such as from one or more heterogenous networks 812 , a vehicular network 814 , a machine-to-machine (M2M) or device-to-device (D2D) network (not shown), or other network arrangements.
- the edge layer 820 may further invoke a cloud layer 830 and cloud services 832 to perform further data processing or data retrieval (e.g., at one or more remote data centers or offices).
- a variety of disaggregated resources available in the edge layer 820 may be combined, pooled, or coordinated in order to perform tasks for clients and other consumers. For instance, resources at a first base station 822 A, including compute resources 842 A, may be coordinated with the compute resources 842 B at a second base station 822 B. Other types of resources, not shown, may include communication resources, storage and caching resources, and the like, provided among a variety of devices or nodes. The resources may be arranged into compute pools, memory pools, or storage pools, coordinated via various interconnects and network protocols.
- a distributed IPU mesh network 840 enables a variety of coordinated and distributed workload processing operations, including the creation and coordination of virtual pools as discussed herein.
- a pool of resources and services may also be created and coordinated among multiple locations, based on communications between a first IPU (e.g., IPU 844 A) and a second IPU, such as to coordinate with resources at a second node or system (e.g., compute resources 842 B at base station 822 B).
- workloads, workload tasks, processing operations, and other related concepts may be distributed across the IPU mesh network 840 based on the performance characteristics and coordination properties referenced herein.
- the arrangement of resource requests and servicing in the distributed IPU mesh network 840 from the resource pools discussed below may enable the use of one-to-one (peer-to-peer) connections, one-to-many connections, or many-to-many connections.
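- As a hedged illustration of the workload distribution described above, the following Python sketch picks a target node in the mesh from simple, assumed performance characteristics (load and available accelerators). The node attributes and selection policy are hypothetical placeholders, not values or logic from the disclosure.

def select_target(nodes, required_accel=None):
    # Pick the least-loaded node that offers the required accelerator type.
    candidates = [
        n for n in nodes
        if required_accel is None or required_accel in n["accelerators"]
    ]
    if not candidates:
        return None  # e.g., fall back to the cloud layer or reject the request
    return min(candidates, key=lambda n: n["load"])

mesh_nodes = [
    {"name": "base-station-822A", "load": 0.7, "accelerators": {"gpu"}},
    {"name": "base-station-822B", "load": 0.3, "accelerators": {"gpu", "fpga"}},
]
print(select_target(mesh_nodes, required_accel="gpu"))  # -> base-station-822B entry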
- the following also addresses techniques for discovery and attestation of resources. Such techniques can ensure a correct level of end-to-end connectivity between different IPUs to enable a virtual pool.
- IPUs provide access to local devices that can be exposed externally.
- An IPU can monitor the different resources that are within the local system which hosts the IPU, can identify local resources that are not being used, and can also take such resources and physical functions and remove them.
- An IPU further may monitor functions across the system, or even perform power management. This monitoring and re-configuration may also be performed on-demand. For instance, whenever another device requires access to the physical functions, an IPU can expose the physical functions to the rest of the infrastructure.
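- A minimal sketch, using hypothetical Python data structures (not an actual IPU API), of how an IPU might track local resources, identify idle ones, and expose or withdraw a physical function on demand as described above:

class LocalResourceMonitor:
    def __init__(self):
        # resource name -> {"busy": bool, "exposed": bool}
        self.resources = {}

    def observe(self, name: str, busy: bool) -> None:
        exposed = self.resources.get(name, {}).get("exposed", False)
        self.resources[name] = {"busy": busy, "exposed": exposed}

    def idle_resources(self):
        return [n for n, r in self.resources.items() if not r["busy"]]

    def expose_on_demand(self, name: str) -> bool:
        # Expose a physical function to the infrastructure if it is currently idle.
        entry = self.resources.get(name)
        if entry and not entry["busy"]:
            entry["exposed"] = True
            return True
        return False

    def withdraw(self, name: str) -> None:
        # Remove a resource or physical function from external exposure.
        if name in self.resources:
            self.resources[name]["exposed"] = False

monitor = LocalResourceMonitor()
monitor.observe("gpu-pf0", busy=False)
monitor.observe("cpu-cores-8-15", busy=True)
print(monitor.idle_resources())             # ['gpu-pf0']
print(monitor.expose_on_demand("gpu-pf0"))  # True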
- the following refers to uses of telemetry by the IPUs and related logic to enable a robust use of virtual pools. When creating connectivity between two systems, the telemetry that is occurring between the systems can be accessed and evaluated by the IPUs.
- FIG. 9 depicts an example distributed computing environment enabled for operation of virtual pools.
- This environment includes an Infrastructure Device 920 which is paired with an IPU 930 at a compute entity.
- the IPU 930 is able to access various physical and virtual resources via one or more interconnects, such as via a CXL root complex 940 .
- the CXL root complex 940 is used to connect with CXL devices 942 A, 942 B via physical functions (PFs), and CPUs or Devices 944 via local or remote virtual functions (VFs).
- multiple functions are added to enable coordination of virtual pools and virtual devices.
- the architecture is also expanded to provide logic features that allow the edge devices to discover and expose resources to other peers and allow creation and maintenance of a virtual pool of resources (referred to in the following paragraphs as a Vpool).
- the logic features are implemented on at least one network Infrastructure Device 920 such as switches or gateways, connected to at least one IPU 930 operating at a device/node (not shown).
- the logic as implemented on the network Infrastructure Device 920 may be provided by an IPU or other specially programmed unit or circuitry at the network Infrastructure Device 920 .
- System Telemetry Logic 921 collects telemetry data in the network for traffic management, quality of service management, and discovery, using the logic features discussed in the following paragraphs.
- the Infrastructure Device 920 includes Vpool Discovery and Advertisement Logic 923 .
- the Discovery and Advertising Logic 923 may be used to discover what resources are exposed among the various edge nodes or devices that host resources. Every time that a new device or node is identified, the Vpool Discovery and Advertisement Logic 923 will determine if the new device/node has an IPU that is enabled with resource sharing (e.g., an IPU configured with the functionality similar to IPU 930 ). If this new device/node has an IPU enabled with resource sharing, the Vpool Discovery and Advertisement Logic 923 will add the device/node to a list of devices/nodes supporting resource pooling. Resources may be added to the corresponding Vpool when they become available.
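- The following Python sketch, under assumed and simplified data structures, illustrates the registration flow just described: a newly identified device/node is tracked only if its IPU supports resource sharing, after which its advertised resources can be recorded in a pool. All names are illustrative, not an actual API.

class VpoolDiscovery:
    def __init__(self):
        self.sharing_nodes = set()  # nodes whose IPUs support resource sharing
        self.pool = {}              # resource type -> list of (node, quantity)

    def on_new_node(self, node_id: str, supports_sharing: bool) -> None:
        if supports_sharing:
            self.sharing_nodes.add(node_id)

    def on_advertisement(self, node_id: str, resource_type: str, quantity: int) -> None:
        # Called when an IPU notifies that a resource type is available in some quantity.
        if node_id not in self.sharing_nodes:
            return  # ignore nodes that are not enabled for resource pooling
        self.pool.setdefault(resource_type, []).append((node_id, quantity))

disc = VpoolDiscovery()
disc.on_new_node("node-930", supports_sharing=True)
disc.on_advertisement("node-930", "gpu", quantity=2)
print(disc.pool)  # {'gpu': [('node-930', 2)]}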
- the IPU 930 may reach out to the Vpool Discovery and Advertising Logic 923 to notify that a certain type of resource is available in a certain quantity, and potentially during a given time. Resources also may be removed at different times.
- Other aspects of peer discovery and the operation of peer discovery logic can be used to establish a relationship or association between a first IPU and a second IPU.
- Vpool Quality Attestation Logic 925 may be used to verify the trustworthiness of a resource (e.g., a newly introduced resource), such as by contacting an Attestation Server 910 to validate that the IPU 930 and its associated device/node is trustworthy. If attestation cannot be performed, then the device/node resources will not be added to a Vpool.
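- A hedged sketch of this attestation gate: a node's resource is admitted to the Vpool only after a (hypothetical) attestation check succeeds. The attest() callable stands in for a real exchange with an attestation server, which this sketch does not implement.

def admit_to_vpool(pool: dict, node_id: str, resource: str, attest) -> bool:
    # Add a node's resource to the pool only if attestation succeeds.
    try:
        trusted = attest(node_id)   # e.g., a call out to an attestation server
    except Exception:
        trusted = False             # attestation could not be performed
    if not trusted:
        return False                # resources are not added to the Vpool
    pool.setdefault(resource, []).append(node_id)
    return True

vpool = {}
print(admit_to_vpool(vpool, "node-930", "gpu", attest=lambda node: node == "node-930"))  # True
print(vpool)  # {'gpu': ['node-930']}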
- the Vpool VLAN Traffic Logic 922 may be used to match requests coming from the clients that need specific resources for a certain amount of time and certain quality of service.
- a given IPU (on behalf of the client device/node, not shown in FIG. 9 ) may invoke this logic (e.g., after discovering a resource via the Vpool Discovery and Advertising Logic 923 ) to access a certain resource type during a given time with a given quality of service.
- the Vpool VLAN Traffic Logic 922 also can identify the resources that are within the Vpool type that are capable of providing the required quality of service.
- otherwise, if no resource can meet the request, a response (e.g., a NACK) is returned.
- the logic at the Infrastructure Device 920 will cause operations to (i) create a secure VLAN between the client device/node and the resource that is exposed by the target (host) device/node; (ii) communicate with both IPUs to start the use of the target resource; (iii) the target device/node IPU will connect the VF/PF of the device or the resource that is being shared (e.g., memory or core) to the client IPU; and (iv) the client device/node IPU will expose a new VF/PF or memory or compute to the host or device.
- memory or compute resources may be accessed via CXL.mem or CXL.cache.
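- A simplified, hypothetical Python sketch of the request-matching flow above: a pooled resource that can meet the requested quality of service is selected, and the VLAN creation and VF/PF connection steps (i)-(iv) are returned as ordered actions. None of these helpers correspond to a real product API; they only illustrate the sequence.

def handle_client_request(pool, resource_type, min_bandwidth_gbps):
    # Match the request against resources currently advertised in the Vpool.
    candidates = [
        r for r in pool.get(resource_type, [])
        if r["bandwidth_gbps"] >= min_bandwidth_gbps
    ]
    if not candidates:
        return {"status": "NACK"}  # no resource can satisfy the requested QoS

    target = candidates[0]
    actions = [
        ("create_secure_vlan", target["node"]),      # (i) VLAN between client and host node
        ("notify_both_ipus", target["node"]),        # (ii) start use of the target resource
        ("connect_vf_pf_on_host", target["name"]),   # (iii) host IPU connects the shared VF/PF
        ("expose_vf_pf_on_client", target["name"]),  # (iv) client IPU exposes it locally
    ]
    return {"status": "ACK", "actions": actions}

pool = {"gpu": [{"name": "gpu-pf0", "node": "node-930", "bandwidth_gbps": 25}]}
print(handle_client_request(pool, "gpu", min_bandwidth_gbps=10))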
- the Vpool Quality of Service Logic 924 may be used to manage infrastructure networking QoS, e.g., to ensure that the end-to-end networking bandwidth and latency are proper.
- this QoS management may involve the IPU on both sides (e.g., an IPU at the client device/node which consumes resources, and the IPU 930 at the hosting device/node which serves resources).
- the Vpool Quality of Service Logic 924 may be used to estimate (or predict) the capability for the virtual resource pool to meet the QoS requirement.
- the Vpool Quality of Service Logic 924 may use intelligence to identify a current load, current work queues, and other usage information, to estimate or identify if the request can be handled in a manner that meets the QoS requirement. If the request cannot be handled while meeting the QoS requirement, then the response can be declined by the infrastructure.
- the Vpool Quality of Service Logic 924 may also evaluate the work queues and the pending workloads, current workloads, or forecasted workloads in the virtual resource pool.
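- A minimal sketch, under assumed metrics, of the admission decision just described: estimate whether the pooled resource can absorb the request given current load and pending work, and decline it otherwise. The fields and thresholds are illustrative assumptions, not values from the disclosure.

def can_meet_qos(resource_state: dict, request: dict) -> bool:
    # Rough capacity check combining current load, queued work, and the new request.
    projected_load = (
        resource_state["current_load"]
        + resource_state["queued_work"]
        + request["estimated_load"]
    )
    return (
        projected_load <= resource_state["capacity"]
        and resource_state["expected_latency_ms"] <= request["max_latency_ms"]
    )

state = {"current_load": 0.5, "queued_work": 0.2, "capacity": 1.0, "expected_latency_ms": 8}
request = {"estimated_load": 0.2, "max_latency_ms": 10}
print(can_meet_qos(state, request))  # True -> the infrastructure may accept the request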
- the IPU 930 (e.g., an IPU containing the functionality of FIG. 7 ) located at the hosting device/node may be configured with corresponding functionality to achieve the following functions.
- the Remote Vpool Mapping 931 provides a function responsible for exposing remote resources as local resources, either via VF/PF or expanded compute resources available in the IPU 930 (not shown in FIG. 9 , but consistent with the compute resources discussed above with reference to FIGS. 4 to 6 ).
- the Local Resource Remote Exposure Logic 932 manages the local resources that are exposed to the remote devices or platforms. This logic will be responsible for providing the correct level of proof of identity that can be used by the infrastructure to validate and attest the local resources and devices.
- the Discovery and Advertisement Logic 933 works with the host platform (device/node) to determine what resources are going to be exposed and shared with the vLANs in the various pools.
- the Attestation Logic 934 may be responsible for validating the remote resources (if needed). In many examples, such attestation may be performed by the Infrastructure Device 920 . However, in some examples, an IPU located at the hosting device/node can also perform attestation operations.
- the QoS Resource Logic 935 is responsible for performing resource shaping to monitor the quality of service that was requested by the origin (the client device/node).
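- As one illustrative way such resource shaping could be realized (the disclosure does not mandate this mechanism), the following Python sketch uses a simple token bucket to cap use of a shared resource at the rate a client requested:

import time

class TokenBucketShaper:
    def __init__(self, rate_ops_per_s: float, burst: float):
        self.rate = rate_ops_per_s
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Permit an operation only if the requested rate has not been exceeded.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

shaper = TokenBucketShaper(rate_ops_per_s=100.0, burst=10.0)
print(shaper.allow())  # True while the client stays within its requested QoS envelope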
- the IPU 930 and Infrastructure Device 920 may coordinate with a remote attestation entity provided by a trusted Attestation Server 910 .
- the Attestation Server 910 provides a mechanism to attest devices/nodes and potentially resources.
- the Attestation Server 910 uses a Trusted Entities and Personas Database 911 to store the certificates and data that can be used to attest.
- the Attestation Server 910 also operates Attestation Logic 912 to perform the attestation operations. Other data or operations may be performed or invoked at the Attestation Server 910 .
- Switches or other multi-layered switches also may be used to coordinate the volume compute, storage, and memory infrastructure to which the Switch-connected IPU bridges.
- extensible switches can perform fast/slow triage of the resource operations discussed above (e.g., to determine whether a mesh function is to be performed by a Switch-based logic or vectored to attached compute).
- extensible compute/memory/storage services can provide for a backup soft-switch function to shunt the switch out of the flow when the switch needs to be serviced or when the switch is being reinitialized and reintegrated into trust zones (e.g., following a firmware upgrade).
- a service mesh is expected to be transparent by design, its provisioning remains entirely flexible, and the use of an IPU as a bridge provides extremely low latency since the mesh services are only an IPU hop, as opposed to a network hop, from the Switch.
- Top-of-Rack (ToR) switches can use a part of the rack itself, while spine switches may be linked to a plurality of independent-failure zones (such as, comprising separately powered systems).
- FIG. 10 depicts a flowchart for an example method of operating a virtual pool of resources at a host computing system, for a virtual pool that is accessible across distributed computing entities.
- the method 1000 may be implemented by one or more networked processing units (e.g., IPUs) or other forms of processing circuitry, and instructions embodied thereon to be executed by the networked processing unit(s) (or processing circuitry), consistent with the examples and functionality of networked processing units, as discussed above.
- operations are performed to identify and/or determine availability of one or more resource(s) (e.g., at least one physical resource or virtual resource) at a host computing system.
- the identification operations may be performed by collecting or querying data relating to the status and state of the resources available to the host computing system, and the determination operations may be performed at the host computing system by evaluating the data against some criteria or logic.
- the resource is a physical resource located at the host computing system, and the physical resource is accessible via an interconnect (e.g., a Compute Express Link (CXL) interconnect, where the physical resource is a CXL device that is accessible via a CXL root complex).
- the resource is a virtual resource accessible via at least one local virtual function or at least one remote virtual function (e.g., where the virtual resource corresponds to a central processing unit (CPU) or hardware compute device).
- operations are performed to provide a notification to a network infrastructure device (e.g., switch or gateway) that the resource is available for use in a virtual resource pool.
- the network infrastructure device is a switch, gateway, or access point, that is located in a network connecting the host computing system and the client computing system.
- operations are performed to receive and process a request for the resource in the virtual resource pool, for a request that is coordinated by the network infrastructure device.
- the request is coordinated via the network infrastructure device and provided to the host computing system based on network telemetry evaluated by the network infrastructure device.
- the request is coordinated via the network infrastructure device based on at least one of: attestation of the resource, attestation of the request, or attestation of the client computing system.
- operations are performed to verify attestation of the request or of the requesting computing system.
- attestation operations may be coordinated using a trusted attestation server and the attestation verification examples discussed above.
- operations are performed to service (e.g., fulfill, execute, perform operations based on) the request for the resource, and such service operations may be controlled, affected, or otherwise based on at least one quality of service (QoS) requirement or other service metrics or evaluative criteria.
- the method 1000 may be performed by a networked processing unit at the host computing system, to process the request for the resource based on such a request originating from another networked processing unit (at the client computing system or elsewhere in the edge computing network).
- Other operations involving peer discovery and coordination, such as where the networked processing unit or the other networked processing unit uses peer discovery logic to associate the networked processing units with each other, may also be involved.
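- To tie the preceding operations together, the following is a consolidated Python sketch of the host-side flow of method 1000, with hypothetical helper callables (notify, attest, serve) standing in for the infrastructure, attestation, and servicing mechanisms discussed above; it mirrors the ordering of the operations but is not an implementation from the disclosure.

def operate_virtual_pool(host_resources, notify, attest, serve):
    # Identify/determine availability of resources at the host computing system.
    available = [r for r in host_resources if not r["busy"]]

    # Notify the network infrastructure device that these resources may join a pool.
    for r in available:
        notify(resource=r["name"])

    # Return a handler for requests coordinated via the network infrastructure device.
    def on_request(request):
        if not attest(request["client"]):
            return {"status": "rejected", "reason": "attestation failed"}
        # Service the request subject to its QoS requirement.
        return serve(request["resource"], qos=request["qos"])

    return on_request

handler = operate_virtual_pool(
    host_resources=[{"name": "gpu-pf0", "busy": False}],
    notify=lambda resource: print("advertised", resource),
    attest=lambda client: client == "trusted-client",
    serve=lambda resource, qos: {"status": "served", "resource": resource, "qos": qos},
)
print(handler({"client": "trusted-client", "resource": "gpu-pf0", "qos": {"max_latency_ms": 10}}))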
- Example 1 is a method performed at a host computing system for operating a virtual pool of resources in an edge computing network, the method comprising: identifying availability of a resource at the host computing system; transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receiving a request for the resource in the virtual resource pool, the request provided on behalf of a client computing system, wherein the request is coordinated via the network infrastructure device, and wherein the request includes at least one quality of service (QoS) requirement; and servicing the request for the resource, based on the at least one QoS requirement.
- Example 2 the subject matter of Example 1 optionally includes subject matter where the resource is a physical resource located at the host computing system, wherein the physical resource is accessible via an interconnect, and wherein servicing the request includes providing access to the physical resource.
- Example 3 the subject matter of Example 2 optionally includes subject matter where the interconnect is a Compute Express Link (CXL) interconnect, and wherein the physical resource is a CXL device that is accessible via a CXL root complex.
- Example 4 the subject matter of any one or more of Examples 1-3 optionally include subject matter where the resource is a virtual resource accessible via at least one local virtual function or at least one remote virtual function, wherein servicing the request includes providing access to the virtual function.
- Example 5 the subject matter of Example 4 optionally includes subject matter where the virtual resource corresponds to a central processing unit (CPU) or hardware compute device.
- Example 6 the subject matter of any one or more of Examples 1-5 optionally include subject matter where the request is coordinated via the network infrastructure device based on a capability for the virtual resource pool to meet the QoS requirement.
- Example 7 the subject matter of any one or more of Examples 1-6 optionally include subject matter where the request is coordinated via the network infrastructure device based on at least one of: attestation of the resource, attestation of the request, or attestation of the client computing system.
- Example 8 the subject matter of any one or more of Examples 1-7 optionally include verifying an attestation of the request for the resource or attestation of the client computing system, using a trusted attestation server.
- Example 9 the subject matter of any one or more of Examples 1-8 optionally include subject matter where the request is coordinated via the network infrastructure device and provided to the host computing system based on network telemetry evaluated by the network infrastructure device.
- Example 10 the subject matter of any one or more of Examples 1-9 optionally include subject matter where the network infrastructure device is a switch, gateway, or access point, that is located in a network connecting the host computing system and the client computing system.
- Example 11 the subject matter of any one or more of Examples 1-10 optionally include subject matter where the method is performed by a networked processing unit at the host computing system, wherein the request for the resource originates from another networked processing unit at the client computing system, and wherein the networked processing unit or the another network processing unit utilize peer discovery logic to associate the respective units.
- Example 12 is a host computing system, comprising: a networked processing unit accessible in an edge computing network; and a storage medium including instructions embodied thereon, wherein the instructions, which when executed by the networked processing unit, configure the networked processing unit to: determine availability of a resource at the host computing system; transmit, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receive a request for the resource in the virtual resource pool, the request provided on behalf of a client computing system, wherein the request is coordinated via the network infrastructure device, and wherein the request includes at least one quality of service (QoS) requirement; and cause the host computing system to service the request for the resource, using at least the resource at the host computing system, based on the at least one QoS requirement.
- Example 13 the subject matter of Example 12 optionally includes subject matter where the resource is a physical resource located at the host computing system, wherein the physical resource is accessible via an interconnect, and wherein causing the host computing system to service the request includes providing access to the physical resource.
- Example 14 the subject matter of Example 13 optionally includes subject matter where the interconnect is a Compute Express Link (CXL) interconnect, and wherein the physical resource is a CXL device that is accessible via a CXL root complex.
- Example 15 the subject matter of any one or more of Examples 12-14 optionally include subject matter where the resource is a virtual resource accessible via at least one local virtual function or at least one remote virtual function, wherein causing the host computing system to service the request includes providing access to the virtual function.
- Example 16 the subject matter of Example 15 optionally includes subject matter where the virtual resource corresponds to a central processing unit (CPU) or hardware compute device.
- Example 17 the subject matter of any one or more of Examples 12-16 optionally include subject matter where the request is coordinated via the network infrastructure device based on a capability for the virtual resource pool to meet the QoS requirement.
- Example 18 the subject matter of any one or more of Examples 12-17 optionally include subject matter where the request is coordinated via the network infrastructure device based on at least one of: attestation of the resource, attestation of the request, or attestation of the client computing system.
- Example 19 the subject matter of any one or more of Examples 12-18 optionally include subject matter where the instructions further configure the networked processing unit to: verify an attestation of the request for the resource or attestation of the client computing system, using a trusted attestation server.
- Example 20 the subject matter of any one or more of Examples 12-19 optionally include subject matter where the request is coordinated via the network infrastructure device and provided to the host computing system based on network telemetry evaluated by the network infrastructure device.
- Example 21 the subject matter of any one or more of Examples 12-20 optionally include subject matter where the network infrastructure device is a switch, gateway, or access point, that is located in a network connecting the host computing system and the client computing system.
- Example 22 the subject matter of any one or more of Examples 12-21 optionally include subject matter where the request for the resource originates from another networked processing unit at the client computing system, and wherein the networked processing unit or the another network processing unit utilize peer discovery logic to associate the respective units.
- Example 24 is an apparatus of an edge computing system comprising means to implement any of Examples 1-23, or other subject matter described herein.
- Example 25 is an apparatus of an edge computing system comprising logic, modules, circuitry, or other means to implement any of Examples 1-23, or other subject matter described herein.
- Example 26 is a networked processing unit (e.g., an infrastructure processing unit as discussed here) or system including a networked processing unit, configured to implement any of Examples 1-23, or other subject matter described herein.
- Example 27 is an edge computing system, including respective edge processing devices and nodes to invoke or perform any of the operations of Examples 1-23, or other subject matter described herein.
- Example 28 is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of any Examples 1-23, or other subject matter described herein.
- Example 29 is a system to implement any of Examples 1-28.
- Example 30 is a method to implement any of Examples 1-28.
Abstract
Various approaches for deploying and using virtual pools of compute resources with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A host computing system may be configured to operate a virtual pool of resources, with operations including: identifying, at the host computing system, availability of a resource at the host computing system; transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receiving a request for the resource in the virtual resource pool that is provided on behalf of a client computing system, based on the request being coordinated via the network infrastructure device and including at least one quality of service (QoS) requirement; and servicing the request for the resource, based on the at least one QoS requirement.
Description
- This application claims the benefit of priority to United States Provisional Patent Application No. 63/425,857, filed Nov. 16, 2022, and titled “COORDINATION OF DISTRIBUTED NETWORKED PROCESSING UNITS”, which is incorporated herein by reference in its entirety.
- Embodiments described herein generally relate to data processing, network communication, and communication system implementations of distributed computing, including the implementations with the use of networked processing units such as infrastructure processing units (IPUs) or data processing units (DPUs).
- System architectures are moving to highly distributed multi-edge and multi-tenant deployments. Deployments may have different limitations in terms of power and space. Deployments also may use different types of compute, acceleration, and storage technologies in order to overcome these power and space limitations. Deployments also are typically interconnected in tiered and/or peer-to-peer fashion, in an attempt to create a network of connected devices and edge appliances that work together.
- Edge computing, at a general level, has been described as systems that provide the transition of compute and storage resources closer to endpoint devices at the edge of a network (e.g., consumer computing devices, user equipment, etc.). As compute and storage resources are moved closer to endpoint devices, a variety of advantages have been promised such as reduced application latency, improved service capabilities, improved compliance with security or data privacy requirements, improved backhaul bandwidth, improved energy consumption, and reduced cost. However, many deployments of edge computing technologies—especially complex deployments for use by multiple tenants—have not been fully adopted.
- In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings in which:
- FIG. 1 illustrates an overview of a distributed edge computing environment, according to an example;
- FIG. 2 depicts computing hardware provided among respective deployment tiers in a distributed edge computing environment, according to an example;
- FIG. 3 depicts additional characteristics of respective deployment tiers in a distributed edge computing environment, according to an example;
- FIG. 4 depicts a computing system architecture including a compute platform and a network processing platform provided by an infrastructure processing unit, according to an example;
- FIG. 5 depicts an infrastructure processing unit arrangement operating as a distributed network processing platform within network and data center edge settings, according to an example;
- FIG. 6 depicts functional components of an infrastructure processing unit and related services, according to an example;
- FIG. 7 depicts a block diagram of example components in an edge computing system which implements a distributed network processing platform, according to an example;
- FIG. 8 depicts an arrangement of distributed processing provided at an edge computing network layer, according to an example;
- FIG. 9 depicts a distributed computing environment enabled for operation of virtual pools, according to an example; and
- FIG. 10 depicts a flowchart of a method for operating a virtual pool of resources at a host computing system, for a virtual pool that is accessible across distributed computing entities, according to an example.
- The following discusses configurations and operations of networked processing units, specifically infrastructure processing units (IPUs), to connect different systems with different capabilities in a distributed mesh network. In this configuration, IPUs can be used to expose and share resources across different nodes in resource pools, with such distributed resource pools referred to herein as “Virtual Pools” or “Vpools”.
- The approaches discussed herein include creation of a virtual pool using one or more network infrastructure entities such as switches, gateways, and other networking devices. For instance, a virtual pool may be created between an IPU and a network infrastructure device to access physical functions and resources at a compute node that hosts the IPU. Such physical functions and resources may be accessed by the IPU using an interconnect such as Compute Express Link (CXL), to access and pool any number of disaggregated resources at the compute node. As is understood, CXL provides an interconnect with access to many types of local devices (e.g., GPUs, memory, storage, accelerators, etc.), including resource instances that can be accessed by different tenants and servers. Another aspect discussed herein relates to the creation of a virtual pool to access virtual resources from the IPU. For instance, features of logic are discussed to provide access to virtual functions and resources, and to access physical resources from the virtual resources while ensuring an end-to-end quality of service for use of the virtual pool. These and other features of a virtual pool are listed in more detail after an overview of distributed computing and IPUs.
-
FIG. 1 is a block diagram 100 showing an overview of a distributed edge computing environment, which may be adapted for implementing the present techniques for distributed networked processing units. As shown, theedge cloud 110 is established from processing operations among one or more edge locations, such as asatellite vehicle 141, abase station 142, anetwork access point 143, an onpremise server 144, anetwork gateway 145, or similar networked devices and equipment instances. These processing operations may be coordinated by one or moreedge computing platforms 120 or systems that operate networked processing units (e.g., IPUs, DPUs) as discussed herein. - The
edge cloud 110 is generally defined as involving compute that is located closer to endpoints 160 (e.g., consumer and producer data sources) than thecloud 130, such asautonomous vehicles 161,user equipment 162, business andindustrial equipment 163,video capture devices 164,drones 165, smart cities andbuilding devices 166, sensors andIoT devices 167, etc. Compute, memory, network, and storage resources that are offered at the entities in theedge cloud 110 can provide ultra-low or improved latency response times for services and functions used by the endpoint data sources as well as reduce network backhaul traffic from theedge cloud 110 towardcloud 130 thus improving energy consumption and overall network usages among other benefits. - Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer end point devices than at a base station or a central office data center). As a general design principle, edge computing attempts to minimize the number of resources needed for network services, through the distribution of more resources that are located closer both geographically and in terms of in-network access time.
-
FIG. 2 depicts examples of computing hardware provided among respective deployment tiers in a distributed edge computing environment. Here, one tier at an on-premise edge system is an intelligent sensor orgateway tier 210, which operates network devices with low power and entry-level processors and low-power accelerators. Another tier at an on-premise edge system is anintelligent edge tier 220, which operates edge nodes with higher power limitations and may include a high-performance storage. - Further in the network, a
network edge tier 230 operates servers including form factors optimized for extreme conditions (e.g., outdoors). A datacenter edge tier 240 operates additional types of edge nodes such as servers, and includes increasingly powerful or capable hardware and storage technologies. Still further in the network, a coredata center tier 250 and apublic cloud tier 260 operate compute equipment with the highest power consumption and largest configuration of processors, acceleration, storage/memory devices, and highest throughput network. - In each of these tiers, various forms of Intel® processor lines are depicted for purposes of illustration; it will be understood that other brands and manufacturers of hardware will be used in real-world deployments. Additionally, it will be understood that additional features or functions may exist among multiple tiers. One such example is connectivity and infrastructure management that enable a distributed IPU architecture, that can potentially extend across all of
tiers -
FIG. 3 depicts additional characteristics of respective deployment tiers in a distributed edge computing environment, based on the tiers discussed with reference toFIG. 2 . This figure depicts additional network latencies at each of thetiers - With these variations and service features in mind, edge computing within the
edge cloud 110 may provide the ability to serve and respond to multiple applications of the use cases in real-time or near real-time and meet ultra-low latency requirements. As systems have become highly-distributed, networking has become one of the fundamental pieces of the architecture that allow achieving scale with resiliency, security, and reliability. Networking technologies have evolved to provide more capabilities beyond pure network routing capabilities, including to coordinate quality of service, security, multi-tenancy, and the like. This has also been accelerated by the development of new smart network adapter cards and other type of network derivatives that incorporated capabilities such as ASICs (application-specific integrated circuits) or FPGAs (field programmable gate arrays) to accelerate some of those functionalities (e.g., remote attestation). - In these contexts, networked processing units have begun to be deployed at network cards (e.g., smart NICs), gateways, and the like, which allow direct processing of network workloads and operations. One example of a networked processing unit is an infrastructure processing unit (IPU), which is a programmable network device that can be extended to provide compute capabilities with far richer functionalities beyond pure networking functions. Another example of a network processing unit is a data processing unit (DPU), which offers programmable hardware for performing infrastructure and network processing operations. The following discussion refers to functionality applicable to an IPU configuration, such as that provided by an Intel® line of IPU processors. However, it will be understood that functionality will be equally applicable to DPUs and other types of networked processing units provided by ARM®, Nvidia®, and other hardware OEMs.
-
FIG. 4 depicts an example compute system architecture that includes acompute platform 420 and a network processing platform comprising anIPU 410. This architecture—and in particular theIPU 410—can be managed, coordinated, and orchestrated by the functionality discussed below, including with the functions described with reference toFIG. 6 . - The
main compute platform 420 is composed by typical elements that are included with a computing node, such as one ormore CPUs 424 that may or may not be connected via a coherent domain (e.g., via Ultra Path Interconnect (UPI) or another processor interconnect); one ormore memory units 425; one or more additionaldiscrete devices 426 such as storage devices, discrete acceleration cards (e.g., a field-programmable gate array (FPGA), a visual processing unit (VPU), etc.); abaseboard management controller 421; and the like. Thecompute platform 420 may operate one or more containers 422 (e.g., with one or more microservices), within a container runtime 423 (e.g., Docker containerd). TheIPU 410 operates as a networking interface and is connected to thecompute platform 420 using an interconnect (e.g., using either PCIe or CXL). TheIPU 410, in this context, can be observed as another small compute device that has its own: (1) Processing cores (e.g., provided by low-power cores 417), (2) operating system (OS) and cloudnative platform 414 to operate one ormore containers 415 and acontainer runtime 416; (3) Acceleration functions provided by anASIC 411 orFPGA 412; (4)Memory 418; (5) Network functions provided bynetwork circuitry 413; etc. - From a system design perspective, this arrangement provides important functionality. The
IPU 410 is seen as a discrete device from the local host (e.g., the OS running in the compute platform CPUs 424) that is available to provide certain functionalities (networking, acceleration etc.). Those functionalities are typically provided via Physical or Virtual PCIe functions. Additionally, theIPU 410 is seen as a host (with its own IP etc.) that can be accessed by the infrastructure to setup an OS, run services, and the like. TheIPU 410 sees all the traffic going to thecompute platform 420 and can perform actions—such as intercepting the data or performing some transformation—as long as the correct security credentials are hosted to decrypt the traffic. Traffic going through the IPU goes to all the layers of the Open Systems Interconnection model (OSI model) stack (e.g., from physical to application layer). Depending on the features that the IPU has, processing may be performed at the transport layer only. However, if the IPU has capabilities to perform traffic intercept, then the IPU also may be able to intercept traffic at the traffic layer (e.g., intercept CDN traffic and process it locally). - Some of the use cases being proposed for IPUs and similar networked processing units include: to accelerate network processing; to manage hosts (e.g., in a data center); or to implement quality of service policies. However, most of functionalities today are focused at using the IPU at the local appliance level and within a single system. These approaches do not address how the IPUs could work together in a distributed fashion or how system functionalities can be divided among the IPUs on other parts of the system. Accordingly, the following introduces enhanced approaches for enabling and controlling distributed functionality among multiple networked processing units. This enables the extension of current IPU functionalities to work as a distributed set of IPUs that can work together to achieve stronger features such as, resiliency, reliability, etc.
- Distributed Architectures of IPUs
-
FIG. 5 depicts an IPU arrangement operating as a distributed network processing platform within network and data center edge settings. In a first deployment model of acomputing environment 510, workloads or processing requests are directly provided to an IPU platform, such as directly toIPU 514. In a second deployment model of thecomputing environment 510, workloads or processing requests are provided to someintermediate processing device 512, such as a gateway or NUC (next unit of computing) device form factor, and theintermediate processing device 512 forwards the workloads or processing requests to theIPU 514. It will be understood that a variety of other deployment models involving the composability and coordination of one or more IPUs, compute units, network devices, and other hardware may be provided. - With the first deployment model, the
IPU 514 directly receives data from use cases 502A. The IPU 514 operates one or more containers with microservices to perform processing of the data. As an example, a small gateway (e.g., a NUC type of appliance) may connect multiple cameras to an edge system that is managed or connected by the IPU 514. The IPU 514 may process data as a small aggregator of sensors that runs on the far edge, or may perform some level of inline processing or preprocessing and send the payload to be further processed by the IPU or by the system to which the IPU connects. - With the second deployment model, the
intermediate processing device 512 provided by the gateway or NUC receives data fromuse cases 502B. Theintermediate processing device 512 includes various processing elements (e.g., CPU cores, GPUs), and may operate one or more microservices for servicing workloads from theuse cases 502B. However, theintermediate processing device 512 invokes theIPU 514 to complete processing of the data. - In either the first or the second deployment model, the
IPU 514 may connect with a local compute platform, such as that provided by a CPU 516 (e.g., Intel® Xeon CPU) operating multiple microservices. The IPU may also connect with a remote compute platform, such as that provided at a data center by CPU 540 at a remote server. As an example, consider a microservice that performs some analytical processing (e.g., face detection on image data), where the CPU 516 and the CPU 540 provide access to this same microservice. The IPU 514, depending on the current load of the CPU 516 and the CPU 540, may decide to forward the images or payload to one of the two CPUs. Data forwarding or processing can also depend on other factors such as an SLA for latency or performance metrics (e.g., perf/watt) in the two systems. As a result, the distributed IPU architecture may accomplish features of load balancing.
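- As an illustrative, non-limiting sketch of the forwarding decision just described, the following Python pseudocode selects a target compute platform from load, SLA latency, and perf/watt inputs. The data structure, numeric values, and selection rule are assumptions for illustration and are not part of the described system.

# Sketch (assumed values): choose a target platform for a payload based on
# load, an SLA latency bound, and perf/watt as a tie-breaker.
from dataclasses import dataclass

@dataclass
class PlatformStatus:
    name: str
    load: float           # utilization, 0.0-1.0
    latency_ms: float     # expected request latency
    perf_per_watt: float  # relative efficiency metric

def select_target(platforms, sla_latency_ms):
    # Keep only platforms expected to meet the latency SLA.
    candidates = [p for p in platforms if p.latency_ms <= sla_latency_ms]
    if not candidates:
        return None  # no platform can meet the SLA; caller may reject or queue
    # Prefer the least-loaded platform, using perf/watt as a tie-breaker.
    return min(candidates, key=lambda p: (p.load, -p.perf_per_watt))

local = PlatformStatus("CPU 516 (local)", load=0.85, latency_ms=12.0, perf_per_watt=1.0)
remote = PlatformStatus("CPU 540 (data center)", load=0.30, latency_ms=25.0, perf_per_watt=1.4)
target = select_target([local, remote], sla_latency_ms=30.0)
print("forward payload to:", target.name if target else "none")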
- The IPU in the computing environment 510 may be coordinated with other network-connected IPUs. In an example, a Service and Infrastructure Orchestration manager 530 may use multiple IPUs as a mechanism to implement advanced service processing schemes for the user stacks. This may also enable implementation of system functionalities such as failover, load balancing, etc. - In a distributed architecture example, IPUs can be arranged in the following non-limiting configurations. As a first configuration, a particular IPU (e.g., IPU 514) can work with other IPUs (e.g., IPU 520) to implement failover mechanisms. For example, an IPU can be configured to forward traffic to service replicas that run on other systems when a local host does not respond.
- As a second configuration, a particular IPU (e.g., IPU 514) can work with other IPUs (e.g., IPU 520) to perform load balancing across other systems. For example, consider a scenario where CDN traffic targeted to the local host is forwarded to another host in case that I/O or compute in the local host is scarce at a given moment.
- As a third configuration, a particular IPU (e.g., IPU 514) can work as a power management entity to implement advanced system policies. For example, consider a scenario where the whole system (e.g., including CPU 516) is placed in a C6 state (a low-power/power-down state available to a processor) while forwarding traffic to other systems (e.g., IPU 520) and consolidating it.
- As will be understood, fully coordinating a distributed IPU architecture requires numerous aspects of coordination and orchestration. The following examples of system architecture deployments provide discussion of how edge computing systems may be adapted to include coordinated IPUs, and how such deployments can be orchestrated to use IPUs at multiple locations to expand to the new envisioned functionality.
- Distributed IPU Functionality
- An arrangement of distributed IPUs offers a set of new functionalities to enable IPUs to be service focused.
FIG. 6 depicts functional components of an IPU 610, including services and features to implement the distributed functionality discussed herein. It will be understood that some or all of the functional components provided in FIG. 6 may be distributed among multiple IPUs, hardware components, or platforms, depending on the particular configuration and use case involved. - In the block diagram of
FIG. 6, a number of functional components are operated to manage requests for a service running in the IPU (or running in the local host). As discussed above, IPUs can either run services or intercept requests arriving to services running in the local host and perform some action. In the latter case, the IPU can perform the following types of actions/functions (provided as non-limiting examples). - Peer Discovery. In an example, each IPU is provided with Peer Discovery logic to discover other IPUs in the distributed system that can work together with it. Peer Discovery logic may use mechanisms such as broadcasting to discover other IPUs that are available on a network. The Peer Discovery logic is also responsible for working with the Peer Attestation and Authentication logic to validate and authenticate the identities of peer IPUs, determine whether they are trustworthy, and determine whether the current system tenant allows the current IPU to work with them. To accomplish this, an IPU may perform operations such as: retrieving a proof of identity and proof of attestation; connecting to a trusted service running in a trusted server; or validating that the discovered system is trustworthy. Various technologies (including hardware components or standardized software implementations) that enable attestation, authentication, and security may be used with such operations.
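- As a minimal, non-limiting sketch of such broadcast-based peer discovery, the following Python pseudocode sends a discovery message and records only peers that pass an abstracted trust check. The message format, port number, and is_peer_trustworthy() helper are hypothetical placeholders for the Peer Attestation and Authentication logic, not a defined protocol.

# Sketch: discover peer IPUs via UDP broadcast and keep only attested peers.
import json
import socket

DISCOVERY_PORT = 50000  # hypothetical port

def is_peer_trustworthy(peer_info):
    # Placeholder for attestation/authentication, e.g., validating a proof of
    # identity against a trusted attestation service.
    return peer_info.get("attestation_token") is not None

def discover_peers(timeout_s=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout_s)
    sock.sendto(json.dumps({"type": "ipu-discovery"}).encode(),
                ("255.255.255.255", DISCOVERY_PORT))
    peers = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            peer_info = json.loads(data.decode())
            if is_peer_trustworthy(peer_info):
                peers.append({"address": addr[0], **peer_info})
    except socket.timeout:
        pass
    finally:
        sock.close()
    return peers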
- Peer Attestation. In an example, each IPU provides interfaces to other IPUs to enable attestation of the IPU itself. IPU Attestation logic is used to perform an attestation flow within a local IPU in order to create the proof of identity that will be shared with other IPUs. Attestation here may integrate previous approaches and technologies to attest a compute platform. This may also involve the use of trusted
attestation service 640 to perform the attestation operations. - Functionality Discovery. In an example, a particular IPU includes capabilities to discover the functionalities that peer IPUs provide. Once the authentication is done, the IPU can determine what functionalities the peer IPUs provide (using the IPU Peer Discovery logic) and store a record of such functionality locally. Examples of properties to discover can include: (i) the type of IPU, the functionalities provided, and associated KPIs (e.g., performance/watt, cost, etc.); (ii) available functionalities, as well as possible functionalities to execute under secure enclaves (e.g., enclaves provided by Intel® SGX or TDX technologies); (iii) current services that are running on the IPU and on the system that can potentially accept requests forwarded from this IPU; or (iv) other interfaces or hooks that are provided by an IPU, such as access to remote storage, access to a remote VPU, or access to certain functions. In a specific example, a service may be described by properties such as: a UUID; estimated performance KPIs in the host or IPU; average performance provided by the system during the last N units of time (or any other type of indicator); and like properties.
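- As a non-limiting sketch, the following Python pseudocode shows one possible record an IPU could keep after functionality discovery of a peer. The field names are assumptions derived from the properties listed above and do not represent a defined schema.

# Sketch: a locally stored record describing a discovered peer service.
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class PeerServiceRecord:
    service_id: str                       # UUID of the service
    ipu_type: str                         # type of peer IPU
    perf_per_watt: float                  # KPI advertised by the peer
    cost: float                           # relative cost KPI
    supports_secure_enclave: bool         # enclave-capable execution available
    interfaces: List[str] = field(default_factory=list)  # e.g., remote storage, remote VPU
    avg_perf_last_n: float = 0.0          # average performance over the last N time units

record = PeerServiceRecord(
    service_id=str(uuid.uuid4()),
    ipu_type="storage-offload",
    perf_per_watt=1.2,
    cost=0.8,
    supports_secure_enclave=True,
    interfaces=["remote-storage", "remote-vpu"],
)
print(record)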
- Service Management. The IPU includes functionality to manage services that are running either on the host compute platform or in the IPU itself. Managing (orchestrating) services includes performing service and resource orchestration for the services that can run on the IPU or that the IPU can affect. Two types of usage models are envisioned:
- External Orchestration Coordination. The IPU may enable external orchestrators to deploy services on the IPU compute capabilities. To do so, an IPU includes a component similar to Kubernetes (K8s)-compatible APIs to manage the containers (services) that run on the IPU itself. For example, the IPU may run a service that simply provides content to storage connected to the platform. In this case, the orchestration entity running in the IPU may manage the services running in the IPU as happens in other systems (e.g., maintaining the service level objectives).
- Further, external orchestrators can be allowed to register with the IPU the services running on the host that may require the IPU to broker requests, implement failover mechanisms, or provide other functionalities. For example, an external orchestrator may register that a particular service running on the local compute platform is replicated in another edge node managed by another IPU, where requests can be forwarded.
- In this latter use case, external orchestrators may provide to the Service/Application Intercept logic the inputs that are needed to intercept traffic for these services (as such traffic typically is encrypted). This may include properties such as the source and destination of the traffic to be intercepted, or the key to use to decrypt the traffic. Likewise, this information may be needed to terminate TLS so that the IPU can understand the requests that arrive at the IPU and that the other logic may need to parse to take actions. For example, if there is a CDN read request, the IPU may need to decrypt the packet to determine that the network packet includes a read request, and may redirect it to another host based on the content that is being intercepted. Examples of Service/Application Intercept information are depicted in table 620 in
FIG. 6 (a sketch of such an intercept registration follows the next paragraph). - External Orchestration Implementation. External orchestration can be implemented in multiple topologies. One supported topology includes having the orchestrator, running on the backend public or private cloud, manage all the IPUs. Another supported topology includes having the orchestrator, running in a centralized edge appliance, manage all the IPUs. Still another supported topology includes having the orchestrator running in another IPU that is working as the controller, having the orchestrator running distributed in multiple other IPUs that are working as controllers (master/primary nodes), or a hierarchical arrangement.
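- Referring back to the Service/Application Intercept inputs discussed above, the following non-limiting Python sketch shows the kind of registration an external orchestrator might hand to the intercept logic. The field names and the register_intercept() function are hypothetical and are not part of the described system or of table 620.

# Sketch: registering intercept information for a service running on the host.
from dataclasses import dataclass

@dataclass
class InterceptRule:
    service_name: str     # service running on the local host
    src_cidr: str         # source of the traffic to intercept
    dst_endpoint: str     # destination (host:port) of the traffic
    tls_key_ref: str      # reference to the key used to terminate TLS/decrypt
    replica_host: str     # peer IPU/host to which requests may be redirected

INTERCEPT_TABLE = []  # stands in for locally stored intercept information

def register_intercept(rule: InterceptRule):
    # The IPU stores the rule so that, e.g., a decrypted CDN read request
    # matching the source/destination can be redirected to the replica host.
    INTERCEPT_TABLE.append(rule)

register_intercept(InterceptRule(
    service_name="cdn-cache",
    src_cidr="10.0.0.0/24",
    dst_endpoint="10.0.1.5:443",
    tls_key_ref="vault://keys/cdn-cache",
    replica_host="edge-node-2",
))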
- Functionality for Brokering Requests. The IPU may include Service Request Brokering logic and Load Balancing logic to perform brokering actions on arrival of requests for target services running in the local system. For instance, the IPU may determine whether those requests can be executed by other peer systems (e.g., accessible through Service and Infrastructure Orchestration 630). This can occur, for example, because the load in the local system is high. The local IPU may negotiate with other peer IPUs on the possibility of forwarding the request. Negotiation may involve metrics such as cost. Based on such negotiation metrics, the IPU may decide to forward the request.
- Functionality for Load Balancing Requests. The Service Request Brokering and Load Balancing logic may distribute requests arriving at the local IPU to other peer IPUs. In this case, the other IPUs and the local IPU work together and do not necessarily need brokering. Such logic acts similar to a cloud native sidecar proxy. For instance, requests arriving at the system may be sent to the service X running in the local system (either the IPU or the compute platform) or forwarded to a peer IPU that has another instance of service X running. The load balancing distribution can be based on existing algorithms, such as selecting the system with the lowest load, using round robin, etc.
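- The following non-limiting Python sketch illustrates this distribution choice for "service X" across the local instance and peer IPU replicas, using either round robin or lowest-load selection. The replica list and load values are hypothetical.

# Sketch: distribute requests across the local instance and peer replicas.
import itertools

replicas = [
    {"host": "local", "load": 0.7},
    {"host": "peer-ipu-1", "load": 0.4},
    {"host": "peer-ipu-2", "load": 0.9},
]

_round_robin = itertools.cycle(replicas)

def pick_replica(policy="lowest-load"):
    if policy == "round-robin":
        return next(_round_robin)
    # Default: send the request to the least-loaded instance of service X.
    return min(replicas, key=lambda r: r["load"])

for _ in range(3):
    print("forward request to:", pick_replica()["host"])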
- Functionality for Failover, Resiliency, and Reliability. The IPU includes Reliability and Failover logic to monitor the status of the services running on the compute platform or the status of the compute platform itself. The Reliability and Failover logic may require the Load Balancing logic to transiently or permanently forward requests that target specific services in situations such as where: i) the compute platform is not responding; ii) the service running inside the compute node is not responding; or iii) the compute platform load prevents the targeted service from providing the required level of service level objectives (SLOs). Note that the logic must know the required SLOs for the services. Such functionality may be coordinated with
service information 650 including SLO information. - Functionality for Executing Parts of the Workloads. Use cases such as video analytics tend to be decomposed into different microservices that form a pipeline of actions that can be used together. The IPU may include workload pipeline execution logic that understands how workloads are composed and manages their execution. Workloads can be defined as a graph that connects different microservices. The load balancing and brokering logic may be able to understand those graphs and decide what parts of the pipeline are executed where. Further, to perform these and other operations, the Intercept logic may also decode what operations are included as part of the incoming requests.
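- As a non-limiting sketch of such a workload graph and placement decision, the following Python pseudocode represents a video-analytics pipeline and assigns each stage to an execution site. The stage names, sites, and placement rule are illustrative assumptions only.

# Sketch: a pipeline graph of microservices and a simple placement rule.
pipeline = {
    "decode":   {"next": ["detect"],   "needs_accel": False},
    "detect":   {"next": ["classify"], "needs_accel": True},
    "classify": {"next": ["store"],    "needs_accel": True},
    "store":    {"next": [],           "needs_accel": False},
}

def place_stages(pipeline, local_has_accelerator=False):
    placement = {}
    for stage, info in pipeline.items():
        # Stages needing acceleration run where an accelerator is available;
        # other stages stay on the local IPU/compute platform.
        if info["needs_accel"] and not local_has_accelerator:
            placement[stage] = "peer-ipu-with-accelerator"
        else:
            placement[stage] = "local"
    return placement

print(place_stages(pipeline, local_has_accelerator=False))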
- Resource Management
- A distributed network processing configuration may enable IPUs to perform an important role in managing resources of edge appliances. As further shown in
FIG. 6, the functional components of an IPU can operate to perform these and similar types of resource management functionalities. - As a first example, an IPU can provide management of or access to external resources that are hosted in other locations and expose them as local resources using constructs such as Compute Express Link (CXL). For example, the IPU could potentially provide access to a remote accelerator that is hosted in a remote system via CXL.mem/cache and I/O. Another example includes providing access to a remote storage device hosted in another system. In this latter case, the local IPU could work with another IPU in the storage system and expose the remote system as PCIe VF/PF (virtual functions/physical functions) to the local host.
- As a second example, an IPU can provide access to IPU-specific resources. Those IPU resource may be physical (such as storage or memory) or virtual (such as a service that provides access to random number generation).
- As a third example, an IPU can manage local resources that are hosted in the system where it belongs. For example, the IPU can manage power of the local compute platform.
- As a fourth example, an IPU can provide access to other type of elements that relate to resources (such as telemetry or other types of data). In particular, telemetry provides useful data for something that is needed to decide where to execute things or to identify problems.
- I/O Management. Because the IPU is acting as a connection proxy between the external peers (compute systems, remote storage etc.) resources and the local compute, the IPU can also include functionality to manage I/O from the system perspective.
- Host Virtualization and XPU Pooling. The IPU includes Host Virtualization and XPU Pooling logic responsible for managing access to resources that are outside the system domain (or within the IPU) and that can be offered to the local compute system. Here, "XPU" refers to any type of processing unit, whether a CPU, GPU, VPU, acceleration processing unit, etc. The IPU logic, after discovery and attestation, can agree with other systems to share external resources with the services running in the local system. IPUs may advertise available resources to other peers, or such resources can be discovered during the discovery phase as introduced earlier. IPUs may request access to those resources from other IPUs. For example, an IPU on system A may request access to storage on system B managed by another IPU. Remote and local IPUs can work together to establish a connection between the target resources and the local system.
- Once the connection and resource mapping is completed, resources can be exposed to the services running in the local compute node using the VF/PF PCIe and CXL logic. Each of those resources can be offered as a VF/PF. The IPU logic can also expose to the local host resources that are hosted in the IPU itself. Examples of resources to expose may include local accelerators, access to services, and the like.
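- The following non-limiting Python sketch illustrates the bookkeeping an IPU might perform when mapping a remote resource (e.g., storage on system B) so it can be offered to the local host as a virtual function. The classes and steps are assumptions for illustration; actual exposure would occur through PCIe VF/PF or CXL mechanisms rather than this code.

# Sketch: requesting a remote resource from a peer IPU and recording how it
# will be exposed to the local host.
from dataclasses import dataclass

@dataclass
class RemoteResource:
    resource_type: str   # e.g., "storage", "accelerator", "memory"
    host_system: str     # system that owns the resource
    capacity: str        # advertised capacity or capability

@dataclass
class ExposedFunction:
    kind: str            # "VF" or "PF"
    backing: RemoteResource

def request_remote_resource(peer_ipu_inventory, resource_type):
    # Step 1: the local IPU asks the peer IPU whether the resource is available.
    for res in peer_ipu_inventory:
        if res.resource_type == resource_type:
            # Step 2: the peers establish the connection; the local IPU then
            # records the mapping to expose to its host as a virtual function.
            return ExposedFunction(kind="VF", backing=res)
    return None

system_b_inventory = [RemoteResource("storage", "system-B", "2 TB NVMe")]
print(request_remote_resource(system_b_inventory, "storage"))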
- Power Management. Power management is one of the key features to achieve favorable system operational expenditures (OPEX). The IPU is very well positioned to optimize the power consumption of the local system. The distributed and local power management unit is responsible for metering the power that the system is consuming and the load that the system is receiving, and for tracking the service level agreements that the various services running in the system are achieving for the arriving requests. Likewise, when power efficiencies (e.g., power usage effectiveness (PUE)) are not achieving certain thresholds or the local compute demand is low, the IPU may decide to forward the requests intended for local services to other IPUs that host replicas of the services. Such power management features may also coordinate with the Brokering and Load Balancing logic discussed above. As will be understood, IPUs can work together to decide where requests can be consolidated to establish higher power efficiency as a system. When traffic is redirected, the local power consumption can be reduced in different ways. Example operations that can be performed include: changing the system to the C6 state; changing the base frequencies; or performing other adaptations of the system or system components.
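- As a non-limiting sketch of such a power-management decision, the following Python pseudocode consolidates traffic and reduces local power when efficiency or demand thresholds are crossed. The threshold values, the C6 transition, and the forwarding action are placeholders; a real implementation would use platform power interfaces.

# Sketch (assumed thresholds): decide whether to consolidate and power down.
def power_management_step(load, pue, pue_target=1.5, low_load=0.2):
    actions = []
    if pue > pue_target or load < low_load:
        # Consolidate: forward requests for local services to peer IPUs that
        # host replicas, then reduce local power consumption.
        actions.append("forward-requests-to-replica-ipus")
        actions.append("enter-c6-state")   # or lower the base frequencies
    else:
        actions.append("keep-serving-locally")
    return actions

print(power_management_step(load=0.1, pue=1.7))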
- Telemetry Metrics. The IPU can generate multiple types of metrics that can be interesting from services, orchestration or tenants owning the system. In various examples, telemetry can be accessed, including: (i) Out of band via side interfaces; (ii) In band by services running in the IPU; or (iii) Out of band using PCIE or CXL from the host perspective. Relevant types of telemetries can include: Platform telemetry; Service Telemetry; IPU telemetry; Traffic telemetry; and the like.
- System Configurations for Distributed Processing
- Further to the examples noted above, the following configurations may be used for processing with distributed IPUs:
- 1) Local IPUs connected to a compute platform by an interconnect (e.g., as shown in the configuration of
FIG. 4 ); - 2) Shared IPUs hosted within a rack/physical network—such as in a virtual slice or multi-tenant implementation of IPUs connected via CXL/PCI-E (local), or extension via Ethernet/Fiber for nodes within a cluster;
- 3) Remote IPUs accessed via an IP Network, such as within certain latency for data plane offload/storage offloads (or, connected for management/control plane operations); or
- 4) Distributed IPUs providing an interconnected network of IPUs, including as many as hundreds of nodes within a domain.
- Configurations of distributed IPUs working together may also include fragmented distributed IPUs, where each IPU or pooled system provides part of the functionalities, and each IPU becomes a malleable system. Configurations of distributed IPUs may also include virtualized IPUs, such as provided by a gateway, switch, or an inline component (e.g., inline between the service acting as IPU), and in some examples, in scenarios where the system has no IPU.
- Other deployment models for IPUs may include IPU-to-IPU in the same tier or a close tier; IPU-to-IPU in the cloud (data to compute versus compute to data); integration in small device form factors (e.g., gateway IPUs); gateway/NUC +IPU which connects to a data center; multiple GW/NUC (e.g. 16) which connect to one IPU (e.g. switch); gateway/NUC+IPU on the server; and GW/NUC and IPU that are connected to a server with an IPU.
- The preceding distributed IPU functionality may be implemented among a variety of types of computing architectures, including one or more gateway nodes, one or more aggregation nodes, or edge or core data centers distributed across layers of the network (e.g., in the arrangements depicted in
FIGS. 2 and 3 ). Accordingly, such IPU arrangements may be implemented in an edge computing system by or on behalf of a telecommunication service provider (“telco”, or “TSP”), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities. Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives. Such edge computing systems may be embodied as a type of device, appliance, computer, or other “thing” capable of communicating with other edge, networking, or endpoint components. -
FIG. 7 depicts a block diagram of example components in acomputing device 750 which can operate as a distributed network processing platform. Thecomputing device 750 may include any combinations of the components referenced above, implemented as integrated circuits (ICs), as a package or system-on-chip (SoC), or as portions thereof, discrete electronic devices, or other modules, logic, instruction sets, programmable logic or algorithms, hardware, hardware accelerators, software, firmware, or a combination thereof adapted in thecomputing device 750, or as components otherwise incorporated within a larger system. Specifically, thecomputing device 750 may include processing circuitry comprising one or both of a network processing unit 752 (e.g., an IPU or DPU, as discussed above) and a compute processing unit 754 (e.g., a CPU). - The
network processing unit 752 may provide a networked specialized processing unit such as an IPU, DPU, network processor, or other "xPU" outside of the central processing unit (CPU). The processing unit may be embodied as a standalone circuit or circuit package, integrated within an SoC, integrated with networking circuitry (e.g., in a SmartNIC), or integrated with acceleration circuitry, storage devices, or AI or specialized hardware, consistent with the examples above. - The
compute processing unit 754 may provide a processor as a central processing unit (CPU) microprocessor, multi-core processor, multithreaded processor, an ultra-low voltage processor, an embedded processor, or other forms of a special purpose processing unit or specialized processing unit for compute operations. - Either the
network processing unit 752 or thecompute processing unit 754 may be a part of a system on a chip (SoC) which includes components formed into a single integrated circuit or a single package. Thenetwork processing unit 752 or thecompute processing unit 754 and accompanying circuitry may be provided in a single socket form factor, multiple socket form factor, or a variety of other formats. - The
processing units 752, 754 may be coupled with system memory 756, which may be embodied as volatile (e.g., dynamic random access memory (DRAM), etc.) memory. Any number of memory devices may be used to provide for a given amount of system memory. A storage 758 may also couple to the processor 752 via the interconnect 755 to provide for persistent storage of information such as data, applications, operating systems, and so forth. In an example, the storage 758 may be implemented as non-volatile storage such as a solid-state disk drive (SSD). - The components may communicate over the
interconnect 755. The interconnect 755 may include any number of technologies, including industry-standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), Compute Express Link (CXL), or any number of other technologies. The interconnect 755 may couple the processing units 752, 754 to a transceiver 766, for communications with connected edge devices 762. - The
transceiver 766 may use any number of frequencies and protocols. For example, a wireless local area network (WLAN) unit may implement Wi-Fi® communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, or a wireless wide area network (WWAN) unit may implement wireless wide area communications according to a cellular, mobile network, or other wireless wide area protocol. The wireless network transceiver 766 (or multiple transceivers) may communicate using multiple standards or radios for communications at a different range. A wireless network transceiver 766 (e.g., a radio transceiver) may be included to communicate with devices or services in theedge cloud 110 or thecloud 130 via local or wide area network protocols. - The communication circuitry (e.g.,
transceiver 766, network interface 768, external interface 770, etc.) may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., a cellular networking protocol such as a 3GPP 4G or 5G standard, a wireless local area network protocol such as IEEE 802.11/Wi-Fi®, a wireless wide area network protocol, Ethernet, Bluetooth®, Bluetooth Low Energy, an IoT protocol such as IEEE 802.15.4 or ZigBee®, Matter®, low-power wide-area network (LPWAN) or low-power wide-area (LPWA) protocols, etc.) to effect such communication. Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of these components. - The
computing device 750 may include or be coupled toacceleration circuitry 764, which may be embodied by one or more AI accelerators, a neural compute stick, neuromorphic hardware, an FPGA, an arrangement of GPUs, one or more SoCs, one or more CPUs, one or more digital signal processors, dedicated ASICs, or other forms of specialized processors or circuitry designed to accomplish one or more specialized tasks. These tasks may include AI processing (including machine learning, training, inferencing, and classification operations), visual data processing, network data processing, object detection, rule analysis, or the like. Accordingly, in various examples, applicable means for acceleration may be embodied by such acceleration circuitry. - The
interconnect 755 may couple the processing units 752, 754 to an external interface 770 that is used to connect additional devices or subsystems. The devices may include sensors 772, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global navigation system (e.g., GPS) sensors, pressure sensors, and the like. The hub or interface 770 further may be used to connect the edge computing node 750 to actuators 774, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.
edge computing node 750. For example, a display orother output device 784 may be included to show information, such as sensor readings or actuator position. Aninput device 786, such as a touch screen or keypad may be included to accept input. Anoutput device 784 may include any number of forms of audio or visual display, including simple visual outputs such as LEDs or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of theedge computing node 750. - A
battery 776 may power theedge computing node 750, although, in examples in which theedge computing node 750 is mounted in a fixed location, it may have a power supply coupled to an electrical grid, or the battery may be used as a backup or for temporary capabilities. A battery monitor/charger 778 may be included in theedge computing node 750 to track the state of charge (SoCh) of thebattery 776. The battery monitor/charger 778 may be used to monitor other parameters of thebattery 776 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of thebattery 776. Apower block 780, or other power supply coupled to a grid, may be coupled with the battery monitor/charger 778 to charge thebattery 776. - In an example, the
instructions 782 on the processing units 752, 754 (separately, or in combination with the instructions 782 of the machine-readable medium 760) may configure execution or operation of a trusted execution environment (TEE) 790. In an example, the TEE 790 operates as a protected area accessible to the processing units 752, 754 for secure execution of instructions and secure access to data. Security hardening and trusted or protected operations may be provided in the edge computing node 750 through the TEE 790 and the processing units 752, 754. - The
computing device 750 may be a server, an appliance computing device, and/or any other type of computing device with the various form factors discussed above. For example, the computing device 750 may be provided by an appliance computing device that is a self-contained electronic device including a housing, a chassis, a case, or a shell. - In an example, the
instructions 782 provided via the memory 756, the storage 758, or the processing units 752, 754 may be embodied as a non-transitory, machine-readable medium 760 including code to direct the processor 752 to perform electronic operations in the edge computing node 750. The processing units 752, 754 may access the non-transitory, machine-readable medium 760 over the interconnect 755. For instance, the non-transitory, machine-readable medium 760 may be embodied by devices described for the storage 758 or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices. The non-transitory, machine-readable medium 760 may include instructions to direct the processing units 752, 754 to perform a specific sequence or flow of actions, for example, as described with respect to the operations and functionality depicted above.
- A machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format. In an example, information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as instructions themselves or a format from which the instructions may be derived. This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like. The information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions from the information (e.g., processing by the processing circuitry) may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions.
- In an example, the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine-readable medium. The information, when provided in multiple parts, may be combined, unpacked, and modified to create the instructions. For example, the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers.
- In further examples, a software distribution platform (e.g., one or more servers and one or more storage devices) may be used to distribute software, such as the example instructions discussed above, to one or more devices, such as example processor platform(s) and/or example connected edge devices noted above. The example software distribution platform may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices. In some examples, the providing entity is a developer, a seller, and/or a licensor of software, and the receiving entity may be consumers, users, retailers, OEMs, etc., that purchase and/or license the software for use and/or re-sale and/or sub-licensing.
- In some examples, the instructions are stored on storage devices of the software distribution platform in a particular format. A format of computer readable instructions includes, but is not limited to a particular code language (e.g., Java, JavaScript, Python, C, C#, SQL, HTML, etc.), and/or a particular code state (e.g., uncompiled code (e.g., ASCII), interpreted code, linked code, executable code (e.g., a binary), etc.). In some examples, the computer readable instructions stored in the software distribution platform are in a first format when transmitted to an example processor platform(s). In some examples, the first format is an executable binary in which particular types of the processor platform(s) can execute. However, in some examples, the first format is uncompiled code that requires one or more preparation tasks to transform the first format to a second format to enable execution on the example processor platform(s). For instance, the receiving processor platform(s) may need to compile the computer readable instructions in the first format to generate executable code in a second format that is capable of being executed on the processor platform(s). In still other examples, the first format is interpreted code that, upon reaching the processor platform(s), is interpreted by an interpreter to facilitate execution of instructions.
- Use of Distributed IPUs with Virtual Pools and Virtual Resources
- Compute requirements for portable devices (laptops, tablets, etc.) vary widely based on the use case of the respective device. Currently, some of the primary constraints that determine a given configuration for a computing system are power consumption (and battery life), form factor, and monetary or resource cost. Clearly, selecting a lower speed processor, powering on less memory capacity, and the like, results in consuming less power which in turn enables longer battery life. Fan-less designs may be possible and smaller chassis may be sufficient to enable usage of lighter, more portable form factors. However, these lightweight form factors may not be usable at the edge without access to more-powerful distributed computing resources.
- In the context of edge architectures and large-scale infrastructure, a deployment may include hundreds of edge devices with different resource utilizations and different acceleration capabilities. Improvements in device capabilities have generally increased connectivity speeds between these devices, due to improvements to a variety of wireless and fiber/wired connections in networks and between devices. A consequence of increased connectivity speeds is that it is now possible to expect inter-device connectivity speeds that surpass a gigabyte per second. In addition, capabilities have become available to hot-plug components within systems with technologies such as CXL or PCIe. Whereas prior approaches required all system components to be recognized and pre-determined at boot time, newer systems provide increasing capability to add and expand system resources during run-time. Some capabilities even exist in systems to hot-plug memory, storage, and compute resources (depending on the type of device/protocol to hot-plug). At server computing systems, such hot-plugging capabilities are further enhanced by technologies such as memory pooling in microprocessor architectures, where remote memory can be mapped, on demand, to the local address map via CXL.
- A consequence of these trends is that the distinction between physical interconnects and buses that connect various device components and wired/wireless connectivity to the other devices in close proximity has diminished. Thus, the expectation is that newer technologies will continue to offer the use of connected edge computing resources (e.g., provided by nearby servers and compute nodes) to nearby edge devices. However, problems in resource discovery, provisioning, and management have not been fully addressed in distributed computing arrangements, especially relating to the use of physical and virtual resources.
- Consider, for example, a corporation with users having personal laptops, or a user with multiple personal connected devices in close proximity. When there is a need for additional resources (e.g., memory or compute resources), there is the potential to utilize trusted external resources at an on-premises server or nearby base station to address local needs via virtualization at the resource/device level. However, with existing systems there is a lack of mechanisms to discover, provision, and manage such resources. In the following, an infrastructure is proposed to map additional physical and virtual resources (e.g., memory, accelerators, or CPUs) from remote devices as if they were local, with the use of virtual pools. Such configurations can be used to address peaks in resource requirements, and to enable a variety of new use cases.
- In the following, IPU devices and infrastructure devices perform coordinated operations to expose resources and devices within a virtual pool. This virtual pool provides a fabric between different devices and edge platforms with defined levels of quality of service, and with mechanisms to allow the infrastructure to verify that the resources, devices, and platforms that share devices are trustworthy. In contrast, such verification and quality of service features may be particularly complex and unable to operate in existing computing arrangements. For example, consider a small device attempting to access a GPU hosted in a base station, where the small device does not have enough capability to perform attestation because of compute limitations, or lacks the security constructs to connect to the right system. These and other limitations can be overcome through the use of specialized logic and IPU operations.
-
FIG. 8 depicts an example arrangement of distributed processing provided at an edge computing network layer, using a distributed infrastructure processing unit mesh network. Specifically,FIG. 8 depicts computing operations coordinated among auser layer 810, anedge layer 820, and acloud layer 830. Consistent with the examples discussed above (e.g., with reference toFIGS. 1 to 3 ), edge computing operations may be performed at theedge layer 820 based on requests from client devices or consumers at theuser layer 810, such as from one or moreheterogenous networks 812, avehicular network 814, a machine-to-machine (M2M) or device-to-device (D2D) network (not shown), or other network arrangements. Theedge layer 820 may further invoke acloud layer 830 andcloud services 832 to perform further data processing or data retrieval (e.g., at one or more remote data centers or offices). - A variety of disaggregated resources available in the
edge layer 820 may be combined, pooled, or coordinated in order to perform tasks for clients and other consumers. For instance, resources at afirst base station 822A, includingcompute resources 842A, may be coordinated with thecompute resources 842B at asecond base station 822B. Other types of resources, not shown, may include communication resources, storage and caching resources, and the like, provided among a variety of devices or nodes. The resources may be arranged into compute pools, memory pools, or storage pools, coordinated via various interconnects and network protocols. - The use of a distributed
IPU mesh network 840 enables a variety of coordinated and distributed workload processing operations, including the creation and coordination of virtual pools as discussed herein. For instance, a first IPU (e.g.,IPU 844A) at a first node or system (e.g.,base station 822A) may invoke additional compute resources at a second node or system (e.g., computeresources 842B at base station 822) based on communications to a second IPU (e.g.,IPU 844B). A pool of resources and services may also be created and coordinated among multiple locations, based on communications between a first and a second IPU. Thus, workloads, workload tasks, processing operations, and other related concepts may be distributed across theIPU mesh network 840 based on the performance characteristics and coordination properties referenced herein. The arrangement of resource requests and servicing in the distributedIPU mesh network 840 from the resource pools discussed below may enable the use of a one-to-one (peer-to-peer) connections, one-to-many connections, or many-to-many connections. - In addition to the creation and maintenance of virtual pools, the following also addresses techniques for discovery and attestation of resources. Such techniques can ensure a correct level of end-to-end connectivity between different IPUs to enable a virtual pool. As an example, consider a scenario where IPUs provide access to local devices that can be exposed externally. An IPU can monitor different resources that are within the local system which hosts the IPU, the IPU can identify local resources that are not being used, and the IPU can also take the resources and physical functions and remove them. An IPU further may monitor functions across system, or even perform power management. This monitoring and re-configuration may also be performed on-demand For instance, whenever another device requires access to the physical functions, an IPU can expose the physical functions to the rest of the infrastructure. Likewise, the following refers to uses of telemetry by the IPUs and related logic to enable a robust use of virtual pools. When creating connectivity between two systems, the telemetry that is occurring between the systems can be accessed and evaluated by the IPUs.
-
FIG. 9 depicts an example distributed computing environment enabled for operation of virtual pools. This environment includes an Infrastructure Device 920 which is paired with an IPU 930 at a compute entity. The IPU 930, in turn, is able to access various physical and virtual resources via one or more interconnects, such as via a CXL root complex 940. As depicted, the CXL root complex 940 is used to connect with CXL devices, and with devices 944 via local or remote virtual functions (VFs). Although CXL is depicted, resources may be exposed or supported via other types of interconnects. - In the example of
FIG. 9, multiple functions are added to enable coordination of virtual pools and virtual devices. The architecture is also expanded to provide logic features that allow the edge devices to discover and expose resources to other peers and allow creation and maintenance of a virtual pool of resources (referred to in the following paragraphs as a Vpool). The logic features are implemented on at least one network Infrastructure Device 920, such as a switch or gateway, connected to at least one IPU 930 operating at a device/node (not shown). In some examples, multiple network infrastructure devices (e.g., multiple networking entities) operate together to perform the applicable network processing or network functions of the network Infrastructure Device 920. - The logic as implemented on the
network Infrastructure Device 920 may be provided by an IPU or other specially programmed unit or circuitry at the network Infrastructure Device 920. At the Infrastructure Device 920, System Telemetry Logic 921 collects telemetry data in the network for traffic management, quality of service management, and discovery, using the logic features discussed in the following paragraphs. - The
Infrastructure Device 920 includes Vpool Discovery and Advertisement Logic 923. The Discovery and Advertising Logic 923 may be used to discover what resources are exposed among the various edge nodes or devices that host resources. Every time that a new device or node is identified, the Vpool Discovery and Advertisement Logic 923 will determine whether the new device/node has an IPU that is enabled with resource sharing (e.g., an IPU configured with functionality similar to IPU 930). If this new device/node has an IPU enabled with resource sharing, the Vpool Discovery and Advertisement Logic 923 will add the device/node to a list of devices/nodes supporting resource pooling. Resources may be added to the corresponding Vpool when they become available. Over time, the IPU 930 (on behalf of the device/node) may reach out to the Vpool Discovery and Advertising Logic 923 to notify it that a certain type of resource is available in a certain quantity, and potentially during a given time. Resources also may be removed at different times. Other aspects of peer discovery and the operation of peer discovery logic, as discussed above, can be used to establish a relationship or association between a first IPU and a second IPU.
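- As a non-limiting sketch of how the Vpool Discovery and Advertisement Logic 923 might track devices and their advertised resources, the following Python pseudocode keeps a per-node list of advertisements. The supports_resource_sharing() check and the record format are hypothetical placeholders.

# Sketch: track nodes whose IPUs support resource sharing and their resources.
vpool_members = {}   # node_id -> list of advertised resources

def supports_resource_sharing(node_info):
    # Placeholder: in the described system this would query the node's IPU.
    return node_info.get("ipu_resource_sharing", False)

def on_new_node(node_id, node_info):
    if supports_resource_sharing(node_info):
        vpool_members[node_id] = []

def on_resource_advertised(node_id, resource_type, quantity, valid_until=None):
    # An IPU notifies the infrastructure that a resource is available in a
    # certain quantity, potentially only for a given time window.
    if node_id in vpool_members:
        vpool_members[node_id].append(
            {"type": resource_type, "quantity": quantity, "valid_until": valid_until})

on_new_node("edge-node-7", {"ipu_resource_sharing": True})
on_resource_advertised("edge-node-7", "memory", "16 GiB", valid_until="2h")
print(vpool_members)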
- Vpool Quality Attestation Logic 925 may be used to verify the trustworthiness of a resource (e.g., a newly introduced resource), such as by contacting an Attestation Server 910 to validate that the IPU 930 and its associated device/node are trustworthy. If attestation cannot be performed, then the device/node resources will not be added to a Vpool. - The Vpool
VLAN Traffic Logic 922 may be used to match requests coming from the clients that need specific resources for a certain amount of time and a certain quality of service. A given IPU (on behalf of the client device/node, not shown in FIG. 9) may invoke this logic (e.g., after discovering a resource via the Vpool Discovery and Advertising Logic 923) to access a certain resource type during a given time with a given quality of service. The Vpool VLAN Traffic Logic 922 also can identify the resources within the Vpool type that are capable of providing the required quality of service. - In an example, if there is not a resource satisfying the request from a client device/node, a response (e.g., a NACK) is returned. If there is a resource satisfying the request, then the logic at the
Infrastructure Device 920 will cause operations to: (i) create a secure VLAN between the client device/node and the resource that is exposed by the target (host) device/node; (ii) communicate with both IPUs to start the use of the target resource; (iii) cause the target device/node IPU to connect the VF/PF of the device or the resource that is being shared (e.g., memory or a core) to the client IPU; and (iv) cause the client device/node IPU to expose a new VF/PF or memory or compute to its host or device. In the latter case, memory or compute resources may be accessed via CXL.mem or CXL.cache.
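- The following non-limiting Python sketch illustrates the request-matching flow just described: find a Vpool resource of the requested type and quality of service, return a NACK if none exists, and otherwise list the setup steps (i)-(iv) in abstracted form. The data structures and values are assumptions for illustration.

# Sketch: match a client request against the Vpool and describe the setup.
def handle_resource_request(vpool, resource_type, min_qos, duration_s):
    candidates = [r for r in vpool
                  if r["type"] == resource_type and r["qos"] >= min_qos]
    if not candidates:
        return {"status": "NACK"}   # no resource satisfies the request
    target = candidates[0]
    setup_steps = [
        "create secure VLAN between client node and target node",
        "notify client IPU and target IPU to start using the resource",
        "target IPU connects the shared resource VF/PF to the client IPU",
        "client IPU exposes new VF/PF (or CXL.mem/CXL.cache mapping) to its host",
    ]
    return {"status": "ACK", "resource": target, "steps": setup_steps,
            "lease_seconds": duration_s}

vpool = [{"type": "gpu", "qos": 0.9, "node": "base-station-1"}]
print(handle_resource_request(vpool, "gpu", min_qos=0.8, duration_s=600))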
- The Vpool Quality of Service Logic 924 may be used to manage infrastructure networking QoS, e.g., to ensure that the end-to-end networking bandwidth and latency are proper. The IPU on both sides (e.g., an IPU at the client device/node which consumes resources, and the IPU 930 at the hosting device/node which serves resources) will verify that the resources (e.g., I/O) and the network are properly configured to achieve the QoS that is required. - In an example, the Vpool Quality of
Service Logic 924 may be used to estimate (or predict) the capability of the virtual resource pool to meet the QoS requirement. The Vpool Quality of Service Logic 924 may use intelligence to identify the current load, current work queues, and other usage information, to estimate or identify whether the request can be handled in a manner that meets the QoS requirement. If the request cannot be handled while meeting the QoS requirement, then the request can be declined by the infrastructure. Thus, in addition to evaluating network telemetry, the Vpool Quality of Service Logic 924 may also evaluate the work queues and the pending workloads, current workloads, or forecasted workloads in the virtual resource pool.
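- As a non-limiting sketch of such an estimate, the following Python pseudocode derives a rough admission decision from current load, queued work, and a forecast. The estimation formula and the numeric values are simplifications introduced for illustration only.

# Sketch (assumed formula): estimate whether a latency QoS target can be met.
def can_meet_qos(current_load, queued_jobs, forecast_jobs,
                 per_job_ms, required_latency_ms):
    # Expected wait grows with the backlog and forecasted arrivals, inflated
    # by how busy the resource already is.
    expected_wait_ms = (queued_jobs + forecast_jobs) * per_job_ms * (1.0 + current_load)
    return expected_wait_ms + per_job_ms <= required_latency_ms

# Accept the request only when the estimate satisfies the QoS requirement.
accept = can_meet_qos(current_load=0.6, queued_jobs=4, forecast_jobs=2,
                      per_job_ms=5.0, required_latency_ms=100.0)
print("accept request" if accept else "decline (NACK)")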
- The IPU 930 (e.g., an IPU containing the functionality of FIG. 7) located at the hosting device/node may be configured with corresponding functionality to achieve the following functions. - The
Remote Vpool Mapping 931 provides a function responsible to exposing remote resources as local resources either via VF/PF or expanded compute resources available in the IPU 930 (not shown inFIG. 9 , but consistent with the compute resources discussed above with reference toFIGS. 4 to 6 ). - The Local Resource
Remote Exposure Logic 932 manages the local resources that are exposed into the remote devices or platforms. This logic will be responsible to provide the correct level of proof of identity that can be used by the infrastructure to validate and attest the local resources and devices. - The Discovery and
Advertisement Logic 933 works with the host platform (device/node) to determine what resources are going to be exposed and shared with the vLANs in the various pools. - The
Attestation Logic 934 may be responsible to validate the remote resources (if needed). In many examples, such attestation may be performed by theInfrastructure Device 920. However, in some examples, an - IPU located at the hosting device/node (or a client device/node) can also perform attestation operations.
- The
QoS Resource Logic 935 is responsible to perform resource shaping to monitor the quality of service that was requested by the origin (the client device/node). - As suggested above, the
IPU 930 and Infrastructure Device 920 (and, other IPU instances) may coordinate with a remote attestation entity provided by a trustedAttestation Server 910. TheAttestation Server 910 provides a mechanism to attest devices/nodes and potentially resources. - In an example, the
Attestation Server 910 uses a Trusted Entities andPersonas Database 911 to store the certificates and data that can be used to attest. TheAttestation Server 910 also operatesAttestation Logic 912 to perform the attestation operations. Other data or operations may be performed or invoked at theAttestation Server 910. - Further adaptation may be performed at other infrastructure devices to coordinate virtual pooling, such as via special functionality at switches. For example, an extensible switching element may be used to retain the primary routing functionality at the switch (e.g., as discussed above), and offload the actual execution of mesh services. The offloaded execution may be achieved via a Switch-connected-IPU element to one or more servers that can be powered and upgraded independently, and clustered as needed to expand capacity. Spine switches or other multi-layered switches also may be used to coordinate the volume compute, storage and memory infrastructure to which the Switch-connected-IPU bridges.
- Additionally, extensible switches can perform fast/slow triage of the resource operations discussed above (e.g., to determine whether a mesh function is to be performed by a Switch-based logic or vectored to attached compute). Also, extensible compute/memory/storage services can provide for a backup soft-switch function to shunt the switch out of the flow when the switch needs to be serviced or when the switch is being reinitialized and reintegrated into trust zones (e.g., following a firmware upgrade). Because a service mesh is expected to be transparent by design, its provisioning remains entirely flexible, and the use of an IPU as a bridge provides extremely low latency since the mesh services are only an IPU hop, as opposed to a network hop, from the Switch. Further, Top-of-Rack (ToR) switches can use a part of the rack itself, while spine switches may be linked to a plurality of independent-failure zones (such as, comprising separately powered systems).
-
FIG. 10 depicts a flowchart for an example method of operating a virtual pool of resources at a host computing system, for a virtual pool that is accessible across distributed computing entities. Themethod 1000 may be implemented by one or more networked processing units (e.g., IPUs) or other forms of processing circuitry, and instructions embodied thereon to be executed by the networked processing unit(s) (or processing circuitry), consistent with the examples and functionality of networked processing units, as discussed above. For instance, in the following operations, such a networked processing unit is included in a host computing system connected to an edge computing network, although other implementations may be possible. - At 1010, operations are performed to identify and/or determine availability of one or more resource(s) (e.g., at least one physical resource or virtual resource) at a host computing system. In an example, the identification operations may be performed by collecting or querying data relating to the status and state of the resources available to the host computing system, and the determination operations may be performed at the host computing system by evaluating the data against some criteria or logic. In an example, the resource is a physical resource located at the host computing system, and the physical resource is accessible via an interconnect (e.g., a Compute Express Link (CXL) interconnect, where the physical resource is a CXL device that is accessible via a CXL root complex). In another example, the resource is a virtual resource accessible via at least one local virtual function or at least one remote virtual function (e.g., where the virtual resource corresponds to a central processing unit (CPU) or hardware compute device).
- At 1020, operations are performed to provide a notification to a network infrastructure device (e.g., switch or gateway) that the resource is available for use in virtual resource pool. In specific examples, the network infrastructure device is a switch, gateway, or access point, that is located in a network connecting the host computing system and the client computing system.
- At 1030, operations are performed to receive and process a request for the resource in virtual resource pool, for a request that is coordinated by the network infrastructure device. In an example, the request is coordinated via the network infrastructure device and provided to the host computing system based on network telemetry evaluated by the network infrastructure device. In a further example, the request is coordinated via the network infrastructure device based on at least one of: attestation of the resource, attestation of the request, or attestation of the client computing system.
- At 1040, (optionally, if not performed at the network infrastructure device), operations are performed to verify attestation of the request or of the requesting computing system. Such attestation operations may be coordinated using a trusted attestation server and the attestation verification examples discussed above.
- At 1050, operations are performed to service (e.g., fulfill, execute, perform operations based on) the request for the resource, and such service operations may be controlled, affected, or otherwise based on at least one quality of service (QoS) requirement or other service metrics or evaluative criteria.
- As noted, the
method 1000 may be performed by a networked processing unit at the host computing system, to process the request for the resource where such a request originates from another networked processing unit (at the client computing system or elsewhere in the edge computing network). Other operations involving peer discovery and coordination, such as where the networked processing unit or the other networked processing unit uses peer discovery logic to associate the networked processing units with each other, may also be involved.
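- As a non-limiting sketch of the host-side flow of method 1000, the following Python pseudocode strings the operations together; every helper function (identify_available_resource, notify_infrastructure, attest, service_request) is a hypothetical stand-in for the operations 1010 to 1050 described above rather than a defined API.

# Sketch: host-side flow corresponding to operations 1010-1050.
def identify_available_resource():
    return {"type": "accelerator", "id": "fpga-0"}            # operation 1010

def notify_infrastructure(resource):
    print("advertised to infrastructure device:", resource)    # operation 1020

def attest(request):
    return request.get("attestation_ok", False)                # operation 1040

def service_request(resource, request):
    # Operation 1050: fulfill the request subject to the QoS requirement.
    return {"resource": resource["id"], "qos": request["qos"], "status": "served"}

def method_1000(incoming_request):
    resource = identify_available_resource()
    notify_infrastructure(resource)
    # Operation 1030: the request arrives, coordinated by the infrastructure.
    if not attest(incoming_request):                            # optional 1040
        return {"status": "rejected"}
    return service_request(resource, incoming_request)

print(method_1000({"qos": "latency<=50ms", "attestation_ok": True}))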
- Additional examples of the presently described method, system, and device embodiments include the following, non-limiting implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.
- Example 1 is a method performed at a host computing system for operating a virtual pool of resources in an edge computing network, the method comprising: identifying availability of a resource at the host computing system; transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receiving a request for the resource in the virtual resource pool, the request provided on behalf of a client computing system, wherein the request is coordinated via the network infrastructure device, and wherein the request includes at least one quality of service (QoS) requirement; and servicing the request for the resource, based on the at least one QoS requirement.
- In Example 2, the subject matter of Example 1 optionally includes subject matter where the resource is a physical resource located at the host computing system, wherein the physical resource is accessible via an interconnect, and wherein servicing the request includes providing access to the physical resource.
- In Example 3, the subject matter of Example 2 optionally includes subject matter where the interconnect is a Compute Express Link (CXL) interconnect, and wherein the physical resource is a CXL device that is accessible via a CXL root complex.
- In Example 4, the subject matter of any one or more of Examples 1-3 optionally include subject matter where the resource is a virtual resource accessible via at least one local virtual function or at least one remote virtual function, wherein servicing the request includes providing access to the virtual function.
- In Example 5, the subject matter of Example 4 optionally includes subject matter where the virtual resource corresponds to a central processing unit (CPU) or hardware compute device.
- In Example 6, the subject matter of any one or more of Examples 1-5 optionally include subject matter where the request is coordinated via the network infrastructure device based on a capability for the virtual resource pool to meet the QoS requirement.
- In Example 7, the subject matter of any one or more of Examples 1-6 optionally include subject matter where the request is coordinated via the network infrastructure device based on at least one of: attestation of the resource, attestation of the request, or attestation of the client computing system.
- In Example 8, the subject matter of any one or more of Examples 1-7 optionally include verifying an attestation of the request for the resource or attestation of the client computing system, using a trusted attestation server.
- In Example 9, the subject matter of any one or more of Examples 1-8 optionally include subject matter where the request is coordinated via the network infrastructure device and provided to the host computing system based on network telemetry evaluated by the network infrastructure device.
- In Example 10, the subject matter of any one or more of Examples 1-9 optionally include subject matter where the network infrastructure device is a switch, gateway, or access point, that is located in a network connecting the host computing system and the client computing system.
- In Example 11, the subject matter of any one or more of Examples 1-10 optionally include subject matter where the method is performed by a networked processing unit at the host computing system, wherein the request for the resource originates from another networked processing unit at the client computing system, and wherein the networked processing unit or the another networked processing unit utilize peer discovery logic to associate the respective units.
- Example 12 is a host computing system, comprising: a networked processing unit accessible in an edge computing network; and a storage medium including instructions embodied thereon, wherein the instructions, which when executed by the networked processing unit, configure the networked processing unit to: determine availability of a resource at the host computing system; transmit, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receive a request for the resource in the virtual resource pool, the request provided on behalf of a client computing system, wherein the request is coordinated via the network infrastructure device, and wherein the request includes at least one quality of service (QoS) requirement; and cause the host computing system to service the request for the resource, using at least the resource at the host computing system, based on the at least one QoS requirement.
- In Example 13, the subject matter of Example 12 optionally includes subject matter where the resource is a physical resource located at the host computing system, wherein the physical resource is accessible via an interconnect, and wherein causing the host computing system to service the request includes providing access to the physical resource.
- In Example 14, the subject matter of Example 13 optionally includes subject matter where the interconnect is a Compute Express Link (CXL) interconnect, and wherein the physical resource is a CXL device that is accessible via a CXL root complex.
- In Example 15, the subject matter of any one or more of Examples 12-14 optionally include subject matter where the resource is a virtual resource accessible via at least one local virtual function or at least one remote virtual function, wherein causing the host computing system to service the request includes providing access to the virtual function.
- In Example 16, the subject matter of Example 15 optionally includes subject matter where the virtual resource corresponds to a central processing unit (CPU) or hardware compute device.
- In Example 17, the subject matter of any one or more of Examples 12-16 optionally include subject matter where the request is coordinated via the network infrastructure device based on a capability for the virtual resource pool to meet the QoS requirement.
- In Example 18, the subject matter of any one or more of Examples 12-17 optionally include subject matter where the request is coordinated via the network infrastructure device based on at least one of: attestation of the resource, attestation of the request, or attestation of the client computing system.
- In Example 19, the subject matter of any one or more of Examples 12-18 optionally include subject matter where the instructions further configure the networked processing unit to: verify an attestation of the request for the resource or attestation of the client computing system, using a trusted attestation server.
- In Example 20, the subject matter of any one or more of Examples 12-19 optionally include subject matter where the request is coordinated via the network infrastructure device and provided to the host computing system based on network telemetry evaluated by the network infrastructure device.
- In Example 21, the subject matter of any one or more of Examples 12-20 optionally include subject matter where the network infrastructure device is a switch, gateway, or access point, that is located in a network connecting the host computing system and the client computing system.
- In Example 22, the subject matter of any one or more of Examples 12-21 optionally include subject matter where the request for the resource originates from another networked processing unit at the client computing system, and wherein the networked processing unit or the another networked processing unit utilize peer discovery logic to associate the respective units.
- Example 24 is an apparatus of an edge computing system comprising means to implement any of Examples 1-23, or other subject matter described herein.
- Example 25 is an apparatus of an edge computing system comprising logic, modules, circuitry, or other means to implement any of Examples 1-23, or other subject matter described herein.
- Example 26 is a networked processing unit (e.g., an infrastructure processing unit as discussed here) or system including a networked processing unit, configured to implement any of Examples 1-23, or other subject matter described herein.
- Example 27 is an edge computing system, including respective edge processing devices and nodes to invoke or perform any of the operations of Examples 1-23, or other subject matter described herein.
- Example 28 is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of any Examples 1-23, or other subject matter described herein.
- Example 29 is a system to implement any of Examples 1-28.
- Example 30 is a method to implement any of Examples 1-28.
- Although these implementations have been described with reference to specific exemplary aspects, it will be evident that various modifications and changes may be made to these aspects without departing from the broader scope of the present disclosure. Many of the arrangements and processes described herein can be used in combination or in parallel implementations that involve terrestrial network connectivity (where available) to increase network bandwidth/throughput and to support additional edge services. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific aspects in which the subject matter may be practiced. The aspects illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other aspects may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various aspects is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
- Such aspects of the inventive subject matter may be referred to herein, individually and/or collectively, merely for convenience and without intending to voluntarily limit the scope of this application to any single aspect or inventive concept if more than one is disclosed. Thus, although specific aspects have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific aspects shown. This disclosure is intended to cover any adaptations or variations of various aspects. Combinations of the above aspects and other aspects not specifically described herein will be apparent to those of skill in the art upon reviewing the above description.
Claims (25)
1. A method performed at a host computing system for operating a virtual pool of resources in an edge computing network, the method comprising:
identifying availability of a resource at the host computing system;
transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network;
receiving a request for the resource in the virtual resource pool, the request provided on behalf of a client computing system, wherein the request is coordinated via the network infrastructure device, and wherein the request includes at least one quality of service (QoS) requirement; and
servicing the request for the resource, based on the at least one QoS requirement.
2. The method of claim 1 , wherein the resource is a physical resource located at the host computing system, wherein the physical resource is accessible via an interconnect, and wherein servicing the request includes providing access to the physical resource.
3. The method of claim 2 , wherein the interconnect is a Compute Express Link (CXL) interconnect, and wherein the physical resource is a CXL device that is accessible via a CXL root complex.
4. The method of claim 1 , wherein the resource is a virtual resource accessible via at least one local virtual function or at least one remote virtual function, and wherein servicing the request includes providing access to the virtual function.
5. The method of claim 4 , wherein the virtual resource corresponds to a central processing unit (CPU) or hardware compute device.
6. The method of claim 1 , wherein the request is coordinated via the network infrastructure device based on a capability for the virtual resource pool to meet the QoS requirement.
7. The method of claim 1 , wherein the request is coordinated via the network infrastructure device based on at least one of: attestation of the resource, attestation of the request, or attestation of the client computing system.
8. The method of claim 1 , further comprising:
verifying an attestation of the request for the resource or attestation of the client computing system, using a trusted attestation server.
9. The method of claim 1 , wherein the request is coordinated via the network infrastructure device and provided to the host computing system based on network telemetry evaluated by the network infrastructure device.
10. The method of claim 1 , wherein the network infrastructure device is a switch, gateway, or access point, that is located in a network connecting the host computing system and the client computing system.
11. The method of claim 1 , wherein the method is performed by a networked processing unit at the host computing system, wherein the request for the resource originates from another networked processing unit at the client computing system, and wherein the networked processing unit or the another networked processing unit utilize peer discovery logic to associate the respective units.
12. A host computing system, comprising:
a networked processing unit accessible in an edge computing network; and
a storage medium including instructions embodied thereon, wherein the instructions, which when executed by the networked processing unit, configure the networked processing unit to:
determine an availability of a resource at the host computing system;
transmit, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network;
receive a request for the resource in the virtual resource pool, the request provided on behalf of a client computing system, wherein the request is coordinated via the network infrastructure device, and wherein the request includes at least one quality of service (QoS) requirement; and
cause the host computing system to service the request for the resource, using at least the resource at the host computing system, based on the at least one QoS requirement.
13. The computing system of claim 12 , wherein the resource is a physical resource located at the host computing system, wherein the physical resource is accessible via an interconnect, and wherein causing the host computing system to service the request includes providing access to the physical resource.
14. The computing system of claim 13 , wherein the interconnect is a Compute Express Link (CXL) interconnect, and wherein the physical resource is a CXL device that is accessible via a CXL root complex.
15. The computing system of claim 12 , wherein the resource is a virtual resource accessible via at least one local virtual function or at least one remote virtual function, and wherein causing the host computing system to service the request includes providing access to the virtual function.
16. The computing system of claim 15 , wherein the virtual resource corresponds to a central processing unit (CPU) or hardware compute device.
17. The computing system of claim 12 , wherein the request is coordinated via the network infrastructure device based on a capability for the virtual resource pool to meet the QoS requirement.
18. The computing system of claim 12 , wherein the request is coordinated via the network infrastructure device based on at least one of: attestation of the resource, attestation of the request, or attestation of the client computing system.
19. The computing system of claim 12 , wherein the instructions further configure the networked processing unit to:
verify an attestation of the request for the resource or attestation of the client computing system, using a trusted attestation server.
20. The computing system of claim 12 , wherein the request is coordinated via the network infrastructure device and provided to the host computing system based on network telemetry evaluated by the network infrastructure device.
21. The computing system of claim 12 , wherein the network infrastructure device is a switch, gateway, or access point, that is located in a network connecting the host computing system and the client computing system.
22. The computing system of claim 12 , wherein the request for the resource originates from another networked processing unit at the client computing system, and wherein the networked processing unit or the another networked processing unit utilize peer discovery logic to associate the respective units.
23. A non-transitory machine-readable storage medium comprising information representative of instructions, wherein the instructions, when executed by processing circuitry of a host computing system, cause the processing circuitry to:
cause the processing circuitry to determine whether a resource at the host computing system is available for use in a virtual resource pool in an edge computing network;
cause the host computing system to transmit, to a network infrastructure device, a notification that the resource at the host computing system is available for use in the virtual resource pool;
cause the host computing system to process a received request for the resource in the virtual resource pool, the request provided on behalf of a client computing system, wherein the request is coordinated via the network infrastructure device, and wherein the request includes at least one quality of service (QoS) requirement; and
cause the processing circuitry to service the request for the resource, based on the at least one QoS requirement.
24. The non-transitory machine-readable storage medium of claim 23 , wherein the resource is:
a physical resource located at the host computing system that is accessible via an interconnect; or
a virtual resource accessible via at least one local virtual function or at least one remote virtual function.
25. The non-transitory machine-readable storage medium of claim 23 , wherein the request is coordinated via the network infrastructure device based on at least one of:
a capability for the virtual resource pool to meet the QoS requirement; or
attestation provided from attestation of the resource, attestation of the request, or attestation of the client computing system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/090,701 (US20230136615A1) | 2022-11-16 | 2022-12-29 | Virtual pools and resources using distributed networked processing units
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263425857P | 2022-11-16 | 2022-11-16 | |
US18/090,701 (US20230136615A1) | 2022-11-16 | 2022-12-29 | Virtual pools and resources using distributed networked processing units
Publications (1)
Publication Number | Publication Date |
---|---|
US20230136615A1 (en) | 2023-05-04
Family
ID=86144713
Family Applications (10)
Application Number | Publication | Status | Priority Date | Filing Date | Title
---|---|---|---|---|---
US18/090,786 | US20230132992A1 | Pending | 2022-11-16 | 2022-12-29 | Infrastructure-delegated orchestration backup using networked processing units
US18/090,720 | US20230134683A1 | Pending | 2022-11-16 | 2022-12-29 | Memory interleaving coordinated by networked processing units
US18/090,686 | US20230136048A1 | Pending | 2022-11-16 | 2022-12-29 | Federated distribution of computation and operations using networked processing units
US18/090,764 | US20230135645A1 | Pending | 2022-11-16 | 2022-12-29 | Management of workload processing using distributed networked processing units
US18/090,842 | US20230140252A1 | Pending | 2022-11-16 | 2022-12-29 | Localized device attestation
US18/090,862 | US20230137879A1 | Pending | 2022-11-16 | 2022-12-29 | In-flight incremental processing
US18/090,749 | US20230136612A1 | Pending | 2022-11-16 | 2022-12-29 | Optimizing concurrent execution using networked processing units
US18/090,701 | US20230136615A1 | Pending | 2022-11-16 | 2022-12-29 | Virtual pools and resources using distributed networked processing units
US18/090,813 | US20230135938A1 | Pending | 2022-11-16 | 2022-12-29 | Service mesh switching
US18/090,653 | US20230133020A1 | Pending | 2022-11-16 | 2022-12-29 | Accelerator or accelerated functions as a service using networked processing units
Country Status (1)
Country | Link |
---|---|
US (10) | US20230132992A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11770377B1 (en) * | 2020-06-29 | 2023-09-26 | Cyral Inc. | Non-in line data monitoring and security services |
CN116208669B (en) * | 2023-04-28 | 2023-06-30 | 湖南大学 | Intelligent lamp pole-based vehicle-mounted heterogeneous network collaborative task unloading method and system |
US12124343B1 (en) * | 2023-07-27 | 2024-10-22 | Dell Products L.P. | High availability management for cloud infrastructure |
Also Published As
Publication number | Publication date |
---|---|
US20230137879A1 (en) | 2023-05-04 |
US20230136612A1 (en) | 2023-05-04 |
US20230135938A1 (en) | 2023-05-04 |
US20230133020A1 (en) | 2023-05-04 |
US20230134683A1 (en) | 2023-05-04 |
US20230135645A1 (en) | 2023-05-04 |
US20230140252A1 (en) | 2023-05-04 |
US20230132992A1 (en) | 2023-05-04 |
US20230136048A1 (en) | 2023-05-04 |
Legal Events
Code | Title | Description
---|---|---
AS | Assignment | Owner name: INTEL CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: GUIM BERNAT, FRANCESC; KUMAR, KARTHIK; CARRANZA, MARCOS E.; AND OTHERS; SIGNING DATES FROM 20230111 TO 20230208; REEL/FRAME: 062660/0620
STCT | Information on status: administrative procedure adjustment | Free format text: PROSECUTION SUSPENDED