US20110023104A1 - System for hosting customized computing clusters - Google Patents
- Publication number
- US 2011/0023104 A1 (U.S. application Ser. No. 12/894,664)
- Authority
- US
- United States
- Prior art keywords
- cluster
- clusters
- configuration
- computing
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/04—Network management architectures or arrangements
- H04L41/044—Network management architectures or arrangements comprising hierarchical management structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/02—Reservations, e.g. for tickets, services or events
- G06Q10/025—Coordination of plural reservations, e.g. plural trip segments, transportation combined with accommodation
Definitions
- FIG. 3C illustrates that a customized cluster 352, which may be used for cluster 252, may be configured to include a per-cluster monitoring system or dedicated monitoring system 330 for that specific cluster 352, which reports the state of the cluster 352 to the central monitoring system 220.
- a dedicated monitoring system 330 may also be provided in a cluster customized for a customer as shown in the cluster 350 of FIG. 3A .
- networks 300, 301, 302, 340 may be implemented using a wide variety of digital communications network technologies such as different network types including, but not limited to, Gigabit Ethernet, 10 Gigabit Ethernet, Infiniband™, or Myrinet™, with the selection often being based upon the task or computing parameters provided by the cluster user or customer (e.g., need for low latency or bandwidth to access storage or communicate among nodes of a cluster).
- the monitoring system described has two main components: a per-node monitoring system (not shown), such as the Intelligent Platform Management Interface (IPMI), that monitors the hardware and software components on each cluster node, and a central monitoring system 220 and 330 that monitors the network connectivity of each node along with verifying that each node's per-node monitoring system is functioning.
- IPMI: Intelligent Platform Management Interface
- SNMP: Simple Network Management Protocol
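- A minimal sketch of the central side of this two-part monitoring arrangement is shown below. Shelling out to ping stands in for the network-connectivity check, and the per-node IPMI/SNMP query is represented by a placeholder callable; the hostnames and ping options are assumptions, not details from the patent.

```python
import subprocess

def node_reachable(host: str) -> bool:
    # One ICMP echo request as a stand-in for the connectivity check
    # (Linux-style ping options; adjust for other platforms).
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def check_cluster(nodes, per_node_monitor_ok):
    """Central checks described above: is each node reachable on the network,
    and is its per-node (e.g., IPMI-based) monitor still answering?
    `per_node_monitor_ok` is a placeholder for a real IPMI or SNMP query."""
    problems = []
    for node in nodes:
        if not node_reachable(node):
            problems.append((node, "node unreachable"))
        elif not per_node_monitor_ok(node):
            problems.append((node, "per-node monitor not responding"))
    return problems

# Example: check_cluster(["node1.cluster250", "node2.cluster250"], lambda n: True)
```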
- the individual cluster configurations requested by the client and implemented in the system 200 do not affect the overall network design due to the isolation of each cluster, i.e., using the gateways 240 , 241 , 242 as described previously. Access to each of the many clusters 250 , 251 , 252 of system 200 is likewise not affected by the individual cluster configurations since primary access is managed through the firewall and authentication mechanism 210 .
- each cluster 250 , 251 , 252 is customized to unique client specifications.
- the customized clusters 250 , 251 , 252 (e.g., with cluster configurations such as shown in FIG. 3A-3C ) are then assembled and connected to a common network 230 private to the hosting company or service provider.
- when a client connects with a system 208 to their cluster 250, 251, or 252 through the public network 204, such as the Internet, they are authenticated by the firewall and authentication system 210, which determines their assigned and customized cluster 250, 251, or 252.
- the gateway 240 , 241 , 242 is responsible for ensuring that network traffic for one cluster does not interfere with network traffic for another cluster.
- the gateway 240 , 241 , 242 also ensures that a client cannot gain access to another client's cluster (e.g., if a client system 208 or a user of system 208 has proper credentials for accessing cluster 252 but not the other clusters 250 , 251 the gateway 242 will act to block the client's cluster 252 from accessing these clusters 250 , 251 ).
- upon being granted access to their cluster, a client is then able to submit (e.g., by operation of a client system 208 or by other methods) processing jobs to the cluster 250, 251, or 252, perform any specific setup for those jobs, and transfer data to and from their cluster 250, 251, 252 (e.g., via network 204, mechanism 210, private network 230, and an appropriate gateway 240, 241, or 242).
- the client can also be given permission to visit the hosting facility and connect directly to the cluster 250 , 251 , 252 , if necessary.
- the client can also perform any other operations on the cluster 250 , 251 , 252 as necessary for running their jobs or maintaining their cluster 250 , 251 , 252 .
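- For illustration only, the sketch below shows one way a client system 208 might transfer data and submit a processing job to its hosted cluster over SSH (here using the third-party paramiko library). The host name, paths, and scheduler command are hypothetical and not taken from the patent.

```python
import paramiko  # third-party SSH library, used here only for illustration

def submit_job(head_node: str, user: str, key_file: str,
               local_input: str, remote_input: str, submit_cmd: str) -> str:
    """Copy input data to the hosted cluster and run a job-submission command
    on its head node. Host, paths, and the scheduler command are hypothetical."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(head_node, username=user, key_filename=key_file)
    try:
        sftp = ssh.open_sftp()              # transfer data to the cluster
        sftp.put(local_input, remote_input)
        sftp.close()
        _, stdout, _ = ssh.exec_command(submit_cmd)   # e.g. a scheduler submit command
        return stdout.read().decode()       # e.g. the job identifier echoed back
    finally:
        ssh.close()

# job_id = submit_job("cluster250.hosting.example.com", "alice", "/home/alice/.ssh/id_rsa",
#                     "input.dat", "/scratch/alice/input.dat", "qsub run_task.sh")
```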
- the monitoring system 220 may be implemented with hardware and/or software to perform the monitoring method 500 shown in FIG. 5 .
- the system 220 may be thought of as comprising two primary systems: a per-node monitoring system, such as IPMI, that monitors the hardware and software of the node in which it is provided (i.e., in a step 502 of method 500 ) and a main monitoring system 220 that monitors the network availability of each node and verifies that their per-node monitoring systems are functioning (i.e., as shown with step 505 of method 500 ).
- when the per-node monitoring system detects a problem with the node hardware or software at 510, or the main monitoring system 220 (or dedicated system 330) detects a problem with node availability or with the node's per-node monitoring system at 515, they operate to notify the central monitoring system 220 via a mechanism, such as SNMP, of the problem at 520. When the central monitoring system 220 acknowledges the problem, the per-node or main monitoring system 220 (or 330) resumes monitoring its components at 530.
- the monitoring process 500 typically would be operated in an ongoing manner for a cluster system such as system 200 (e.g., 24 hours a day and 7 days a week).
- the staff of the hosting facility (e.g., where the system 200 is located, operated, and maintained) is then notified of the problem, such as via wired or wireless communication (e.g., via e-mail, paging, or other notification methods).
- the notification may indicate where the problem is physically or functionally located (e.g., which cluster, which node within that cluster, and the like).
- the staff or operator is then responsible for solving the problem and clearing the problem from the central monitoring system at 540 .
- a cluster 250 , 251 , 252 may be configured to have a per-cluster monitoring system 330 , in which case, that system 330 is responsible for monitoring the operations of only that cluster but still sends the information to the central monitoring system 220 .
- the monitoring data is collected from the systems 330 either via a specific request by the monitoring system 220 for the status of each component or by having the components periodically send the monitoring system 220 their status. Either mechanism, along with many other methods, results in an effective monitoring system and process for a hosted cluster system 200.
- the clients have the option of having additional monitoring components on each node to monitor additional components as requested by the client. Since SNMP is very expandable and configurable, the additionally monitored components easily integrate into the existing system 200 .
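- The notify/acknowledge/resume flow of method 500 can be sketched as a simple loop. The function below is illustrative only: the SNMP trap of step 520 is represented by an ordinary callback, and the check, acknowledgement, and interval arguments are assumptions rather than elements of the patent.

```python
import time

def monitoring_cycle(components, check, notify_central, acknowledged, interval_s=60):
    """Sketch of the FIG. 5 flow: monitor components (steps 502/505); on a
    detected problem (510/515), notify the central monitoring system (520) --
    an SNMP trap in the text, an ordinary callback here -- wait for the
    acknowledgement, then resume monitoring (530). Clearing the problem (540)
    is left to the hosting-facility staff."""
    while True:                                  # runs continuously, 24/7
        for component in components:
            problem = check(component)           # hardware, software, or availability fault
            if problem:
                notify_central(component, problem)
                while not acknowledged(component):
                    time.sleep(1)                # wait for the central system to acknowledge
        time.sleep(interval_s)                   # then return to routine monitoring
```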
- the system design shown in FIG. 2 has a central authentication and firewall system 210 ; however, the authentication and firewall system 210 may be provided on a per-cluster basis, giving each cluster 250 , 251 , 252 its own firewall and authentication system.
- a system 400 may be configured as shown in FIG. 4 .
- the system 400 includes clusters 250 , 251 , 252 , gateways 240 , 241 , 242 between a private company network 230 and monitoring system 220 and the clusters, and private cluster networks 300 , 301 , 302 .
- a firewall and authentication system 410 , 411 , 412 is provided for each cluster 250 , 251 , 252 .
- Each firewall and authentication system 410 , 411 , 412 is configured to only allow a particular client (or clients if more than one client were provided access to a particular customized cluster) to access the corresponding cluster 250 , 251 , 252 .
- the firewall and authentication system 410, 411, 412 connects the public network 204 to the private cluster network 300, 301, 302.
- the gateway 240 , 241 , 242 is used to connect the private cluster network 300 , 301 , 302 to the private company network 230 , which has the monitoring system 220 .
- the system 200 shown in FIG. 2 includes a single firewall and authentication system 210.
- a hosted cluster system may include a plurality of these firewall and authentication systems to accommodate many clients (or client systems 208 ) accessing their clusters simultaneously.
- Another embodiment of the hosted cluster systems of the invention provides a plurality of monitoring systems such as system 220, for example when one system is determined to be insufficient to monitor all of the clients' cluster components.
- Clusters in the hosted cluster systems are custom built for the clients instead of merely providing a collection of identical clusters. Clients may have unlimited usage of their cluster to perform multiple tasks or computing applications because their cluster is not shared with any other clients (although there may be some applications where two clients or users may partner to solve a particular task, which may result in two or more users being able to access a single customized cluster adapted for the partnership's task).
- the hosted cluster systems are adapted to allow expansion to include nearly any number of clusters. The systems described herein prevent one client from accessing another client's cluster as a result of the gateways between each cluster.
- clusters can each have a unique design that is independent from the other clusters due to the arrangement of the communication networks, access devices, and monitoring components. Clients do not need to concern themselves with maintenance and monitoring of their cluster(s). Since the clusters are hosted and configured on an as-needed basis (or for a particular task and/or for a contracted period of time), the hosted cluster systems can be operated so as to make clusters and clustered computing environments available to clients who may not have the resources (e.g., a small or even large business or organization may lack real estate for computer clusters, lack the needed communications and power infrastructure, and/or lack personnel to perform constant maintenance) to purchase an on-site cluster.
- the client can request clusters of multiple types, such as HPC and load balancing clusters; and the monitoring process can be performed in any order desired.
- Clusters (which may also be called distributed computing systems) may include two or more nodes, which may be employed to perform a computing task.
- a node is a group of circuitry and electronic components designed to perform one or more computing tasks.
- a node may include one or more processors (e.g., Intel Xeon™ or AMD Opteron™), memory, and interface circuitry, or any other additional devices requested by a client.
- a cluster may be defined as a group of two or more nodes that have the capability of exchanging data.
- a particular computing task may be performed upon one node, while other nodes in the cluster perform unrelated computing tasks. Alternatively, portions of a particular computing task may be distributed among the nodes to decrease the time required to perform the computing task as a whole.
- a processor is a device configured to perform an operation upon one or more operands to produce a result. The operations may be performed in response to instructions executed by the processor.
- Clustering software is often implemented on top of an operating system, and such clustering software controls operation of the nodes on the various assigned tasks in a particular manner (e.g., based on the configuration of the hardware and software).
- configuration with regard to a cluster is intended to encompass not only the physical components selected for a cluster and their interconnections with each other in the cluster and the topology of the cluster, but, at least in some cases, configuration also includes configuration of the software running on the computing resources of the cluster, which may include any clustering software utilized to manage the cluster.
- Nodes within a cluster may have one or more storage devices coupled to the nodes.
- a storage device is a persistent device capable of storing large amounts of data.
- a storage device may be a magnetic storage device such as a disk device or an optical storage device such as a compact disc device.
- Nodes physically connected to a storage device may access the storage device directly.
- a storage device may be physically connected to one or more nodes of a cluster, but the storage device need not necessarily be physically connected to all the nodes of a cluster.
- a node not physically connected to a storage device may indirectly access the storage device via a data communication link connecting the nodes. Accordingly, a node may have access to one or more local, global, and/or shared storage devices within a cluster.
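- Tying the preceding definitions together, the sketch below records nodes, storage devices, and a cluster as plain data, with direct versus indirect storage access distinguished as described above. The attribute names are invented for illustration and do not come from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StorageDevice:
    name: str
    kind: str                    # e.g. magnetic disk device or optical compact disc device
    attached_nodes: List[str] = field(default_factory=list)   # physical connections

@dataclass
class Node:
    name: str
    processors: int              # e.g. Xeon- or Opteron-class CPUs
    memory_gb: int

@dataclass
class Cluster:
    nodes: List[Node]            # two or more nodes capable of exchanging data
    storage: List[StorageDevice]

    def can_access(self, node: Node, device: StorageDevice) -> bool:
        """Direct access if the node is physically attached; otherwise indirect
        access over the data communication link through an attached node."""
        if node.name in device.attached_nodes:
            return True
        return len(device.attached_nodes) > 0
```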
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 11/927,921 filed Oct. 30, 2007, which is incorporated herein by reference in its entirety.
- 1. Field of the Invention
- The present invention relates, in general, to distributed computing and clustered computing environments, and, more particularly, to computer software, hardware, and computer-based methods for hosting a set of computer clusters that are uniquely configured or customized to suit a number of remote customers or clients.
- 2. Relevant Background.
- A growing trend in the field of distributed computing is to use two or more computing resources to perform computing tasks. These grouped resources are often labeled clustered computing environments or computing clusters or simply “clusters.” A cluster may include a computer or processors, network or communication links for transferring data among the grouped resources, data storage, and other devices to perform one or more assigned computing processes or tasks. The clusters may be configured for high availability, for higher performance, or to suit other functional parameters. In a typical arrangement, a portion of a company's data center may be arranged and configured to operate as a cluster to perform one task or support the needs of a division or portion of the company. While a company may benefit from use of a cluster periodically on an ongoing basis, there are a number of reasons why it is often undesirable for a company to own and maintain a cluster.
- As one example, High Performance Computing (HPC) clusters are difficult to set up, configure, and manage. An HPC cluster also requires numerous resources for ongoing maintenance, which increases the cost and manpower associated with cluster ownership. Despite these issues, a company may require or at least demand HPC clusters (or other cluster types) to solve large problems that would take an inordinate amount of time to solve with a single computer. The need for HPC and other cluster types is in part due to the fact that processor speeds have stagnated over the past few years. As a result, many companies and other organizations now turn to HPC clusters because their problems cannot be solved more rapidly by simply purchasing a faster processor. These computer users are placed in the difficult position of weighing the benefits of HPC clusters against the resources consumed by owning such clusters. Decision makers often solve this dilemma by not purchasing clusters, and clusters have remained out of reach of some clients as the resource issues appear insurmountable.
- When utilized, HPC systems allow a set of computers to work together to solve a single problem. The large problem is broken down into smaller independent tasks that are assigned to individual computers in the cluster allowing the large problem to be solved faster. Assigning the independent tasks to the computer is often the responsibility of a single node in the cluster designated the master node. The responsibilities of the master node include assigning tasks to nodes, keeping track of which nodes are working on which tasks, and consolidating the results from the individual nodes. The master node is also responsible for determining if a node fails and assigning the task of the failed node to another node to ensure that node failures are handled transparently. Communication between nodes is accomplished through a message passing mechanism implemented by every member of the cluster. Message passing allows the individual computers to share information about their status on solving their piece of the problem and return results to the master node. Currently, those who determine a cluster is worth the drain on resources purchase a cluster, host the cluster, and manage it on their premises or on site.
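- A minimal sketch of the master/worker pattern just described is given below. It is illustrative only and not the patent's implementation: a real HPC cluster would use a resource manager and an MPI-style message-passing library, and every name in the sketch (Task, run_master, the worker labels) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    payload: int  # stand-in for one small piece of the larger problem

def run_master(tasks, workers, compute, has_failed=lambda worker: False):
    """Master-node duties described above: assign tasks, track which worker
    has which task, reassign work from failed nodes, and consolidate results."""
    pending = list(tasks)
    assignments = {}   # worker -> Task currently assigned to it
    results = {}       # task_id -> result reported back by a worker
    while pending or assignments:
        for worker in list(workers):            # hand out work to idle workers
            if worker not in assignments and pending:
                assignments[worker] = pending.pop(0)
        for worker, task in list(assignments.items()):
            if has_failed(worker):              # failure handled transparently:
                pending.append(task)            # the task goes back in the queue
                workers.remove(worker)
                del assignments[worker]
                continue
            results[task.task_id] = compute(task)   # "message passing": worker reports its result
            del assignments[worker]
    return results

if __name__ == "__main__":
    answers = run_master([Task(i, i) for i in range(8)],
                         workers=["node1", "node2", "node3"],
                         compute=lambda t: t.payload ** 2)
    print(sum(answers.values()))  # consolidated result: 140
```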
- Unfortunately, while the number of tasks and computing situations that would benefit from HPC clusters continues to rapidly grow, HPC clusters are not being widely adopted. In part, this is because HPC clusters require the most computers of any cluster type and, thus, cause the most problems with maintenance and management. Other types of clusters that have been more widely adopted include the “load balancing cluster” and the “high availability cluster,” but resources are also an issue with these clusters. A load balancing cluster is a configuration in which a server sends small individual tasks to a cluster of additional servers when it is overloaded. The high availability cluster is a configuration in which a first server watches a second server and if the second server fails, then the first server takes over the function of the second server.
- The multi-cluster subsumes all other classes of clusters because it incorporates multiple clusters to perform tasks. The difficulties for managing clusters are amplified when considering multiple clusters because of their complexity. For example, if one HPC cluster consumes a set of resources, then multiple HPC clusters will, of course, consume a much larger set of resources and be even more expensive to maintain. One method proposed for managing multiple high availability clusters is described in U.S. Pat. No. 6,438,705, but this method is specific only to the managing of high availability clusters. Further, the described method requires each cluster to have a uniform design. Because it is limited to high availability clusters, the owner would not have an option to incorporate multiple cluster types, such as HPC or load-balancing clusters, within the managed multi-cluster. Additionally, the suggested method does not solve one of the fundamental difficulties associated with cluster usage because it requires the cluster to be owned and operated by the user and to remain on the client's property or site. Other discussions of cluster management, such as those found in U.S. Pat. No. 6,748,429, U.S. Pat. No. 5,371,852, and U.S. Pat. No. 5,946,463, generally describe a single cluster configuration and do not relate to operating multi-clusters. In all of these cases, the burden of managing, monitoring, and hosting the cluster remains with the user of the cluster, who owns the cluster and must maintain it on their premises.
- Hence, there remains a need for systems and methods for providing clusters to users or “clients” such as companies and other organizations that provide the computational assets or power that the clients demand while not presenting an unacceptable burden on the clients' resources. Preferably, these systems and methods would be effective in providing a cluster that is adapted to suit a particular need or computing task rather than forcing a one-size-fits-all solution upon a cluster user.
- To address the above and other problems, the present invention provides methods and systems for hosting a plurality of clusters that are each configured for a particular task or computing application presented by a user or client. In particular, the present invention provides for configuration, access control, and monitoring of multiple customized clusters that are hosted for one or more remote clients. For example, system or cluster configuration data may be generated for a cluster based on input from a client or user regarding their computing needs and planned tasks, and this configuration data may be used to configure a cluster particularly for that client. The customized cluster is then hosted at a central hosting facility and is made accessible to that client, such as via a public network such as the Internet.
- More particularly, a computer system or network is provided for hosting computing clusters for clients or customers (such as businesses and organizations that desire a cluster but do not want to own, operate, and maintain one on their premises). The system includes a first cluster including a set of computing resources such as processing nodes, data storage, and a private communications network that is arranged or implemented in a first configuration. The system also includes a second cluster having a set of computing resources in a second configuration, which differs from the first configuration (e.g., both may be HPC clusters but be configured to handle a different client-assigned or defined task). The first configuration provides a first computing environment for performing a first client task while the second configuration provides a second computing environment for performing a second client task (which typically will differ from the first client task). The first and second configurations may differ due to configuration of the processing nodes in the clusters, based on configuration of the data storage, based on the private communications network or its connections, or based on software modules provided on the nodes, or based on other hardware or software components and/or configurations.
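- As one way to picture the first-configuration/second-configuration distinction, the sketch below records the per-client choices the text enumerates (processing nodes, data storage, private network, software) as plain data. The field names and example values are assumptions made for illustration, not terms from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClusterConfiguration:
    """Per-client configuration data covering the items the text lists:
    processing nodes, data storage, the private network, and software."""
    client: str
    cluster_type: str          # e.g. "HPC", "load balancing", "high availability"
    node_count: int
    cores_per_node: int
    memory_gb_per_node: int
    storage_layout: str        # e.g. "shared storage node" or "dedicated storage network"
    interconnect: str          # e.g. "Gigabit Ethernet", "10 Gigabit Ethernet", "Infiniband"
    software: List[str] = field(default_factory=list)

# Two hosted clusters, both HPC, configured differently for different client tasks.
first_configuration = ClusterConfiguration(
    client="client-a", cluster_type="HPC", node_count=64, cores_per_node=8,
    memory_gb_per_node=32, storage_layout="dedicated storage network",
    interconnect="Infiniband", software=["message passing library", "job scheduler"])
second_configuration = ClusterConfiguration(
    client="client-b", cluster_type="HPC", node_count=16, cores_per_node=4,
    memory_gb_per_node=64, storage_layout="shared storage node",
    interconnect="10 Gigabit Ethernet", software=["job scheduler"])
```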
- The system may further include a monitoring system that monitors the clusters for connectivity and availability or other operational problems on a cluster level and, typically, on a per-node basis (such as with monitors provided for each node) and issues alerts to operations and/or maintenance personnel based on identified issues. The system also provides clients or client systems access to the clusters via a public communications network that is linked, such as via a firewall, to a private company network to which the clusters are linked, such as via a gateway mechanism. The system is adapted to control access of the clients to the clusters such that a client can only access particular ones of the clusters (e.g., the cluster that has been configured according to their specifications or computing parameters or to perform their computing tasks). For example, the firewall mechanism may act to determine which cluster a client is attempting to access and then to determine whether the requesting client has permission or authorization to access that cluster. The gateway mechanisms operate, in part, to isolate each cluster such that communications within a cluster such as on the private cluster communications network are separated (e.g., do not have to share bandwidth of a single system network).
- FIG. 1 illustrates a multi-cluster system available prior to the invention;
- FIG. 2 is a functional block diagram illustrating a hosted cluster system of one embodiment of the invention;
- FIGS. 3A-3C illustrate three representative embodiments of clusters that are configured to provide customization to suit a particular task or computing application (e.g., to meet the particular needs of a requesting customer);
- FIG. 4 illustrates another embodiment of a hosted cluster system of the invention in which dedicated firewall and authentication mechanisms or systems are provided for each cluster; and
- FIG. 5 is a flow diagram representing a monitoring process implemented in a hosted cluster system in one embodiment of the invention for monitoring operation of multiple, customized clusters.
- The present invention is directed to methods and systems for hosting multiple clusters or clustered computing environments such that each of the clusters is configured to match or address a particular client or user computing task or problem (e.g., in response to a request for a hosted cluster from a client that identifies their computing and associated requirements). The cluster systems of the invention differ from prior clusters in part because they are physically provided at one or more locations that are remote from the processing user or client's facilities (i.e., the computing resources are not owned and operated by the user). The client may establish processing parameters that are used to configure a cluster in the system in a manner that suits their needs and, then, access their hosted cluster from a remote location via a communications network such as the Internet or other network.
- The hosted cluster systems and hosting methods of the invention are described herein in relation to three issues associated with hosting multiple customized clusters that were identified by the inventor. Particularly, the systems and methods of the invention address issues associated with arranging the clusters in a consistent and useful manner and with controlling client access to the clusters. Additionally, the systems and methods address issues involved with monitoring the individual cluster components. Examples of solutions to each of these problems are described in the embodiments shown in FIGS. 2-5.
- It will be clear from the following description that the managed and hosted clusters of the various embodiments can be used to give a client control over the design and configuration of a cluster while removing the impediments imposed by traditional clusters, which consume the client's real estate and require nearly constant maintenance. Additionally, the hosting options presented with the hosting methods and hosted cluster systems relieve the client of many burdens and open up future potential avenues for cluster usage. Furthermore, the hosted multi-clusters have the following additional advantages. Providing a hosted cluster to a client does not lock the client into using any one vendor for cluster computing parts because the cluster components can be from any vendor and can be modified and replaced as appropriate to support client needs. Hosted clusters are easily expandable since each cluster is isolated or is maintained as a standalone unit in communication, over a network, with a corresponding client and with monitoring equipment and/or software modules. Hosting also provides for constant monitoring of each cluster because every cluster is hosted and managed.
- Before the invention, the use of multiple cluster systems was known, but these multi-cluster systems were typically limited in ways that hindered their use and adoption. For example, prior multi-cluster computing systems were limited to systems owned and operated by a single user (e.g., to being located upon the owner's facilities), limited to a single configuration such as all clusters being a particular configuration to support a similar processing task, limited to a particular type such as all being high availability, or otherwise limited in their function and/or configuration. One such prior multi-cluster system having high availability clusters is described in U.S. Pat. No. 6,438,705 and is illustrated in FIG. 1. In this diagram, several clusters are shown that each consist of a primary node 100 and a secondary cluster node 101 connected to a primary storage system 110 and secondary storage system 111. As discussed above, this cluster system design requires each cluster to have a uniform design with like hardware and software. The described cluster system limits or even prevents the ability to have multiple cluster types (such as an HPC cluster and a load balancing or high availability cluster) within a single managed multi-cluster. In the patent description, the cluster system is also restricted to high availability clusters and not applicable to other cluster types such as HPC or load balancing. Significantly, this system also does not solve the fundamental difficulties associated with prior cluster systems, i.e., the clients are required to host and manage the clusters that are located on their site or in their facilities.
- In FIG. 2, one preferred embodiment of a hosted cluster system 200 is illustrated such as it may be provided at a hosting facility typically remote from users or clients (i.e., from their accessing nodes or systems 208). The system 200 has, or is connected to, a public network 204 (e.g., a wired and/or wireless digital communications network including the Internet, a LAN, a WAN, or the like), which in turn is connected to a firewall and authentication system 210. The authentication system 210 connects to the company network 230, which has a monitoring system 220 for all the customized clusters 250, 251, 252. The company network 230 also has gateways 240, 241, 242, such as routers, to each unique cluster 250, 251, 252. On the other side of each gateway 240, 241, 242 is a private network 300, 301, 302 for the individual clusters 250, 251, 252.
- The embodiment shown with system 200 provides efficient separation of the individual cluster network traffic to prevent one cluster from interfering with other clusters. The traffic separation is achieved through the gateway 240, 241, and/or 242 located between each cluster 250, 251, and 252 and the company network 230. Each gateway 240, 241, 242 is configured with software and hardware to apply a standard set of rules to only permit traffic destined for its corresponding cluster to pass through from the company network 230 while keeping all cluster traffic internal to the cluster. With this cluster separation, the internal cluster configuration is abstracted from the primary company network 230, allowing the configuration of each cluster to be selected and maintained independently from the other clusters on the network 230. By keeping all clusters 250, 251, 252 connected to a common network 230 through the gateways 240, 241, 242, the clusters can also be monitored centrally (e.g., by monitoring system 220 via common network 230).
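- The standard rule set described for the gateways can be pictured with a small sketch. The subnet layout, names, and helper function below are hypothetical simplifications; an actual gateway would enforce equivalent rules in its routing or firewall hardware and software.

```python
from ipaddress import ip_address, ip_network

# Hypothetical addressing: each private cluster network is its own subnet.
CLUSTER_SUBNETS = {
    "cluster_250": ip_network("10.0.50.0/24"),
    "cluster_251": ip_network("10.0.51.0/24"),
    "cluster_252": ip_network("10.0.52.0/24"),
}

def gateway_forwards(cluster: str, src: str, dst: str) -> bool:
    """Simplified reading of the rule set described in the text: traffic between
    two nodes of the cluster stays on the private cluster network (the gateway
    never forwards it out), and traffic arriving from the company network is
    passed through only if it is destined for this gateway's own cluster."""
    subnet = CLUSTER_SUBNETS[cluster]
    src_inside = ip_address(src) in subnet
    dst_inside = ip_address(dst) in subnet
    if src_inside and dst_inside:
        return False        # internal cluster traffic: kept inside the cluster
    return dst_inside       # other traffic: only if it belongs to this cluster

print(gateway_forwards("cluster_250", "10.0.1.5", "10.0.50.7"))   # True  (company network -> cluster 250)
print(gateway_forwards("cluster_250", "10.0.50.7", "10.0.51.9"))  # False (cluster 250 may not reach cluster 251)
```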
- Access control to the individual clusters 250, 251, 252 is governed by the firewall and authentication mechanism 210. This mechanism 210 may be implemented with several configurations to achieve the goal of ensuring that clients have access to their cluster, and only to their cluster. Each of these configurations performs two primary steps: (1) ensuring that an incoming connection goes to the correct cluster and (2) ensuring that the incoming user has access to that cluster (e.g., that a client or customer operating a client node or system 208 attempting a communication or connection with their cluster is directed to the proper one of the clusters 250, 251, or 252 and that the system 208 or, more typically, the user of the system 208 has access to that particular cluster 250, 251, or 252).
- One useful configuration of the system 200 and mechanism 210 is to give each cluster 250, 251, 252 its own public address. This enables the firewall portion of mechanism 210 to know that all incoming connections to that specific public address are sent to a node (not shown in FIG. 2) on a particular cluster 250, 251, 252. Once the client system 208 is connected to a node on a cluster 250, 251, or 252, that node is then responsible for user authentication to grant access (e.g., a node is provided within each cluster 250, 251, 252 that has the proper software and/or hardware to authenticate accessing users). Another configuration of the system 200 and mechanism 210 is to have each client system 208 connect to a different service on the firewall 210, such as a TCP/IP port. The firewall 210 will then know which services are for which clusters out of the many clusters 250, 251, 252 on the network 230. It is then able to route the connection to a node on the desired cluster 250, 251, or 252 to perform user authentication. Another configuration for system 200 and mechanism 210 is for client system 208 to connect to a common service on the firewall 210 and have the firewall 210 authenticate the user. This configuration requires the firewall 210 to set up a special user environment on the firewall 210 that will only allow the user of the system 208 to communicate with their cluster 250, 251, or 252 and no other clusters. This is accomplished through common virtual machine technology. All of these possible configurations can co-exist together and are not mutually exclusive. Many other configurations exist that provide per-cluster and per-user authentication, and the above-described configurations for the system 200 and mechanism 210 are merely provided as examples.
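- The first two firewall configurations described above (a public address per cluster, or a dedicated service/port per cluster on a shared firewall) amount to a lookup followed by an authorization check. The sketch below assumes hypothetical address, port, and user tables; none of these values come from the patent.

```python
# Hypothetical lookup tables for the first two configurations described above:
# a dedicated public address per cluster, or a dedicated TCP port on the firewall.
ADDRESS_TO_CLUSTER = {"203.0.113.10": "cluster_250",
                      "203.0.113.11": "cluster_251",
                      "203.0.113.12": "cluster_252"}
PORT_TO_CLUSTER = {2250: "cluster_250", 2251: "cluster_251", 2252: "cluster_252"}
AUTHORIZED_USERS = {"cluster_250": {"alice"}, "cluster_251": {"bob"}, "cluster_252": {"carol"}}

def route_connection(public_address: str, port: int, user: str) -> str:
    """Step (1): decide which cluster the incoming connection is meant for.
    Step (2): confirm the connecting user is allowed on that cluster."""
    cluster = ADDRESS_TO_CLUSTER.get(public_address) or PORT_TO_CLUSTER.get(port)
    if cluster is None:
        raise ConnectionRefusedError("no cluster is published at this address or port")
    if user not in AUTHORIZED_USERS[cluster]:
        raise PermissionError(f"{user} is not authorized for {cluster}")
    return cluster   # the firewall then hands the session to a node on this cluster

print(route_connection("203.0.113.10", 22, "alice"))   # -> cluster_250
```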
- Significantly, each individual cluster 250, 251, 252 can have any configuration requested by the client of that cluster. For example, companies or organizations may face differing computing challenges and have different needs for a cluster, and the system 200 is intended to represent generally a hosted cluster system 200 in which a plurality of clusters 250, 251, 252 are provided for access by client systems 208 via public network 204 (or another network). Hence, the clusters 250, 251, 252 are located remotely from the customer or user's facilities or sites (e.g., the system 200, excluding the client remote systems 208 and all or portions of the network 204, may be located at a hosting facility or facilities) and are not typically owned by the customer or user but instead are provided on an as-needed basis by an operator of the system 200 (such as by leasing use of a cluster 250, 251, or 252). As a result, the customer or user is not required to operate and maintain a data center filled with clusters. Further, in contrast to prior practice, each of the clusters 250, 251, 252 is independent and can be configured to suit the needs of the user or customer. For example, each of the cluster users or clients may need a cluster to perform a particular and differing task. Previously, a data center would be provided with clusters of a particular configuration, and the task would be performed by that configured cluster.
- In contrast, the system 200 is adapted such that each of the clusters 250, 251, 252 may have a differing configuration, with such configuration being dynamically established in response to a user or customer's request so as to be better suited to perform their task. For example, the task may be handled better with a cluster configuration designed to provide enhanced processing or enhanced data storage. In other cases, the task may best be served with a cluster configured for very low latency or a cluster with increased bandwidth for communications between nodes and/or accessing storage. The task parameters and needs of a user are determined as part of a personal interview of the customer and/or via data gathered through a data collection screen/interface (not shown) of the system 200. This user input defines the task characteristics or computing parameters, and these are processed manually or with configuration software to select a cluster configuration that matches or suits the customer's needs. The selected cluster configuration (or configuration data) is then used to customize one or more of the clusters 250, 251, 252 to have a configuration for performing tasks assigned by the customer, such as by use of node or system 208. The customer accesses their assigned cluster(s) 250, 251, 252 via the public network 204 and authentication and firewall mechanism 210 through use of a client system 208 as discussed above (or, in some cases, by providing computing requests to an operator of the system 200 in physical form for entry via the monitoring system 220 or the like, or by digital communications with such an operator).
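- The paragraph above describes mapping client task parameters to a cluster configuration, either manually or with configuration software. As a purely illustrative sketch (not the patent's software), the snippet below scores a few candidate configurations against hypothetical task parameters such as required cores, storage, and interconnect latency; the field names and candidate catalog are assumptions.

```python
# Hypothetical sketch: pick a cluster configuration from client task parameters.
from dataclasses import dataclass


@dataclass
class ClusterConfig:
    name: str
    cores: int
    storage_tb: int
    low_latency_interconnect: bool


# Assumed catalog of configurations the hosting facility can build.
CANDIDATES = [
    ClusterConfig("compute-heavy", cores=512, storage_tb=10, low_latency_interconnect=True),
    ClusterConfig("storage-heavy", cores=128, storage_tb=200, low_latency_interconnect=False),
    ClusterConfig("balanced", cores=256, storage_tb=50, low_latency_interconnect=True),
]


def select_config(min_cores: int, min_storage_tb: int, needs_low_latency: bool) -> ClusterConfig:
    """Return the first candidate that satisfies the client's stated task parameters."""
    for cfg in CANDIDATES:
        if (cfg.cores >= min_cores and cfg.storage_tb >= min_storage_tb
                and (cfg.low_latency_interconnect or not needs_low_latency)):
            return cfg
    raise LookupError("no stock configuration fits; a custom build is required")


# Example: a simulation workload needing many cores and a fast interconnect.
print(select_config(min_cores=300, min_storage_tb=5, needs_low_latency=True).name)
```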
- One possible common configuration for the clusters 250, 251, 252 of system 200, such as cluster 250, is shown in FIG. 3A with customized cluster 350, which is linked to the private company network 230 via gateway 240. The cluster 350 is shown as having a plurality of nodes linked by a private communication network 300 for the cluster 350. The cluster 350 also has a dedicated storage node 320 linked to this private network 300, and the storage node 320 is used for common storage of data that is shared between the nodes of the cluster 350. Another useful configuration for clusters is shown with customized cluster 351 in FIG. 3B, which modifies the cluster structure of cluster 350 of FIG. 3A. The customized cluster 351 may be used for cluster 251 of system 200 to service one of the client systems 208 (or a cluster user that operates one of the systems 208 to access the cluster 251). The configuration of the cluster 351 as shown involves providing a storage network 340 that is separated from the inter-node communication network 301.
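- As an informal illustration of the two topologies just described (a shared inter-node network with a storage node versus a separate storage network), the data model below captures a cluster description that configuration tooling could consume; it is a sketch, and the type names, field names, and node labels are assumptions rather than anything defined in the patent.

```python
# Hypothetical sketch: describing the two example cluster topologies as data.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ClusterTopology:
    name: str
    compute_nodes: List[str]
    internode_network: str                 # private network used for inter-node traffic
    storage_node: Optional[str] = None     # dedicated storage node, if any
    storage_network: Optional[str] = None  # separate storage network, if any


# FIG. 3A style: compute nodes plus a storage node on one private network.
shared_net = ClusterTopology(
    name="cluster-350",
    compute_nodes=["node-1", "node-2", "node-3"],
    internode_network="private-net-300",
    storage_node="storage-node-320",
)

# FIG. 3B style: storage traffic moved onto its own network.
split_net = ClusterTopology(
    name="cluster-351",
    compute_nodes=["node-1", "node-2", "node-3"],
    internode_network="private-net-301",
    storage_node="storage-node",
    storage_network="storage-net-340",
)
```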
- FIG. 3C illustrates that a customized cluster 352, which may be used for cluster 252, may be configured to include a per-cluster monitoring system or dedicated monitoring system 330 for that specific cluster 352, which reports the state of the cluster 352 to the central monitoring system 220. Such a dedicated monitoring system 330 may also be provided in a cluster customized for a customer, as shown in the cluster 350 of FIG. 3A. In FIGS. 3A-3C, the networks shown are private to each customized cluster.
- The monitoring system described has two main components: a per-node monitoring system (not shown), such as the Intelligent Platform Management Interface (IPMI), that monitors the hardware and software components on each cluster node, and a central or main monitoring system 220 that monitors the network availability of each node and verifies that the per-node monitoring systems are functioning.
- The individual cluster configurations requested by the client and implemented in the system 200 do not affect the overall network design due to the isolation of each cluster, i.e., the use of the gateways. Client access to the many clusters 250, 251, 252 of system 200 is likewise not affected by the individual cluster configurations since primary access is managed through the firewall and authentication mechanism 210.
- Regarding operation of a hosted cluster system with reference to FIG. 2, each cluster 250, 251, 252 is customized to unique client specifications. The customized clusters 250, 251, 252 (e.g., with cluster configurations such as shown in FIGS. 3A-3C) are then assembled and connected to a common network 230 private to the hosting company or service provider. When a client connects with a system 208 to their cluster 250, 251, or 252 through the public network 204, such as the Internet, they are authenticated by the firewall and authentication system 210, which determines their assigned and customized cluster 250, 251, or 252. At this point, they are connected to their cluster 250, 251, or 252 through the gateway for that cluster. The gateway keeps the cluster's traffic separate from the traffic of the other clusters, and the gateway also blocks access between clusters (e.g., if the client system 208 or a user of system 208 has proper credentials for accessing cluster 252 but not the other clusters 250, 251, the gateway 242 will act to block the client's cluster 252 from accessing these clusters 250, 251). Upon being granted access to their cluster, a client is then able to submit (e.g., by operation of a client system 208 or by other methods) processing jobs to the cluster 250, 251, or 252, perform any specific setup for those jobs, and transfer data to and from their cluster 250, 251, 252 (e.g., via network 204, mechanism 210, private network 230, and an appropriate gateway to the cluster 250, 251, 252), if necessary. The client can also perform any other operations on the cluster 250, 251, 252 as necessary for running their jobs or maintaining their cluster 250, 251, 252.
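- The gateway behavior described above (forwarding a client's traffic into its own cluster while blocking traffic between clusters) can be illustrated with a small, hypothetical forwarding check; the function and constant names below are assumptions, not the patent's implementation.

```python
# Hypothetical sketch: a per-cluster gateway that forwards traffic between its
# own cluster and the company network but drops cluster-to-cluster traffic.

COMPANY_NETWORK = "company-net-230"


def gateway_allows(gateway_cluster: str, src: str, dst: str) -> bool:
    """Allow traffic only between this gateway's cluster and the company network."""
    endpoints = {src, dst}
    return endpoints == {gateway_cluster, COMPANY_NETWORK} or endpoints == {gateway_cluster}


# A client whose jobs run on cluster 252 can reach the company network (e.g.,
# for monitoring traffic), but the gateway blocks attempts to reach cluster 250.
print(gateway_allows("cluster-252", "cluster-252", COMPANY_NETWORK))  # True
print(gateway_allows("cluster-252", "cluster-252", "cluster-250"))    # False
```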
- The monitoring system 220 may be implemented with hardware and/or software to perform the monitoring method 500 shown in FIG. 5. Functionally, the system 220 may be thought of as comprising two primary systems: a per-node monitoring system, such as IPMI, that monitors the hardware and software of the node in which it is provided (i.e., in a step 502 of method 500) and a main monitoring system 220 that monitors the network availability of each node and verifies that their per-node monitoring systems are functioning (i.e., as shown with step 505 of method 500). When the per-node monitoring system detects a problem with the node hardware or software at 510, or the main monitoring system 220 (or dedicated system 330) detects a problem with node availability or the node's per-node monitoring system at 515, they operate to notify the central monitoring system 220 of the problem at 520 via a mechanism such as SNMP. When the central monitoring system 220 acknowledges the problem, the per-node or main monitoring system 220 (or 330) resumes monitoring its components at 530. The monitoring process 500 typically would be operated in an ongoing manner for a cluster system such as system 200 (e.g., 24 hours a day and 7 days a week).
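- The detect/notify/acknowledge/resume loop of method 500 can be paraphrased in code. The sketch below is illustrative only: it models the notification step as a plain function call rather than an actual SNMP trap, and all names (NodeMonitor, check, notify_central) are assumptions keyed loosely to steps 510-530.

```python
# Hypothetical sketch of the method 500 loop: detect a problem (510/515),
# notify the central monitoring system (520), wait for acknowledgement,
# then resume monitoring (530). SNMP is stubbed out as a function call.
import time
from typing import Callable, Optional


class NodeMonitor:
    def __init__(self, node: str, check: Callable[[], Optional[str]],
                 notify_central: Callable[[str, str], bool]):
        self.node = node
        self.check = check                    # returns a problem description or None
        self.notify_central = notify_central  # returns True once the problem is acknowledged

    def run_once(self) -> None:
        problem = self.check()                # steps 502/510: per-node hardware/software check
        if problem is None:
            return
        while not self.notify_central(self.node, problem):  # step 520: notify (e.g., SNMP trap)
            time.sleep(5)                     # wait for the central system to acknowledge
        # step 530: acknowledgement received; resume normal monitoring


# Example wiring with trivial stand-ins for the check and the central system.
monitor = NodeMonitor("cluster-252/node-3",
                      check=lambda: None,          # a healthy node reports nothing
                      notify_central=lambda n, p: True)
monitor.run_once()
```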
- Once the central monitoring system 220 has acknowledged the problem, the staff of the hosting facility (e.g., where the system 200 is located, operated, and maintained) is then notified of the problem, such as via wired or wireless communication (e.g., via e-mail, paging, or other notification methods). The notification may indicate where the problem is physically or functionally located (e.g., which cluster, which node within that cluster, and the like). The staff or operator is then responsible for solving the problem and clearing the problem from the central monitoring system at 540. A cluster 250, 251, 252 may be configured to have a per-cluster monitoring system 330, in which case that system 330 is responsible for monitoring the operations of only that cluster but still sends the information to the central monitoring system 220. The monitoring data is collected from the systems 330 either via a specific request by the monitoring system 220 for the status of each component or by the components periodically sending the monitoring system 220 their status. Either mechanism, along with many other methods, results in an effective monitoring system and process for a hosted cluster system 200. Clients have the option of having additional monitoring components on each node to monitor additional components as requested by the client. Since SNMP is very expandable and configurable, the additionally monitored components easily integrate into the existing system 200.
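- The two collection styles described above (the central system polling each per-cluster monitor, or each per-cluster monitor periodically reporting its own status) can be sketched as follows; the class and method names are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of pull-style and push-style status collection between
# per-cluster monitoring systems (330) and a central monitoring system (220).
from typing import Dict, Iterable


class PerClusterMonitor:
    def __init__(self, cluster: str):
        self.cluster = cluster

    def status(self) -> Dict[str, str]:
        return {"cluster": self.cluster, "state": "ok"}   # stand-in for real component checks


class CentralMonitor:
    def __init__(self):
        self.latest: Dict[str, Dict[str, str]] = {}

    def poll(self, monitors: Iterable[PerClusterMonitor]) -> None:
        # Pull: the central system requests the status of each cluster.
        for m in monitors:
            self.latest[m.cluster] = m.status()

    def receive(self, report: Dict[str, str]) -> None:
        # Push: a per-cluster monitor sends its own periodic report.
        self.latest[report["cluster"]] = report


central = CentralMonitor()
central.poll([PerClusterMonitor("cluster-250"), PerClusterMonitor("cluster-251")])
central.receive(PerClusterMonitor("cluster-252").status())
print(sorted(central.latest))  # ['cluster-250', 'cluster-251', 'cluster-252']
```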
- Numerous cluster arrangements and embodiments are possible given these components. The system design shown in FIG. 2 has a central authentication and firewall system 210; however, the authentication and firewall system 210 may be provided on a per-cluster basis, giving each cluster 250, 251, 252 its own firewall and authentication system. Such a system 400 may be configured as shown in FIG. 4. The system 400 includes the clusters 250, 251, 252, gateways between the private company network 230 (with its monitoring system 220) and the clusters, and private cluster networks. In the system 400, a firewall and authentication system is provided for each cluster 250, 251, 252. Each firewall and authentication system controls access to its corresponding cluster 250, 251, 252. In such a configuration, the firewall and authentication system routes traffic from the public network to the private cluster network, while each gateway links the private cluster network to the private company network 230, which has the monitoring system 220.
- The embodiment of system 200 shown in FIG. 2 shows a single firewall and authentication system 210. Another embodiment of a hosted cluster system, though, may include a plurality of these firewall and authentication systems to accommodate many clients (or client systems 208) accessing their clusters simultaneously. Another embodiment of the hosted cluster systems of the invention is to provide a plurality of monitoring systems such as system 220, such as when one system is determined to be insufficient to monitor all of the clients' cluster components.
- From the description of this system, a number of advantages of hosting clusters over traditional multi-cluster arrangements will be apparent to those skilled in the art. Clusters in the hosted cluster systems are custom built for the clients instead of merely providing a collection of identical clusters. Clients may have unlimited usage of their cluster to perform multiple tasks or computing applications because their cluster is not shared with any other clients (although there may be some applications where two clients or users partner to solve a particular task, which may result in two or more users being able to access a single customized cluster adapted for the partnership's task). The hosted cluster systems are adapted to allow expansion to include nearly any number of clusters. The systems described herein prevent one client from accessing another client's cluster as a result of the gateways between each cluster. In the hosted cluster systems, clusters can each have a unique design that is independent from the other clusters due to the arrangement of the communication networks, access devices, and monitoring components. Clients do not need to concern themselves with maintenance and monitoring of their cluster(s). Since the clusters are hosted and configured on an as-needed basis (or for a particular task and/or for a contracted period of time), the hosted cluster systems can be operated so as to make clusters and clustered computing environments available to clients who may not have the resources (e.g., a small or even large business or organization may lack real estate for a computer cluster, lack the needed communications and power infrastructure, and/or lack personnel to perform constant maintenance) to purchase an on-site cluster.
- Although the invention has been described and illustrated with a certain degree of particularity, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the combination and arrangement of parts can be resorted to by those skilled in the art without departing from the spirit and scope of the invention, as hereinafter claimed. For example, the client can request clusters of multiple types, such as HPC and load balancing clusters; and the monitoring process can be performed in any order desired.
- The above description is not limited to a particular type of cluster or to particular hardware and/or software components used to form a computing cluster. Further, the communication devices and networks may be varied to practice the invention. However, it may be useful at this point to provide further discussion of the components that may be used to implement hosted cluster systems and networks of the present invention. Clusters (which may also be called distributed computing systems) may include two or more nodes, which may be employed to perform a computing task. A node is a group of circuitry and electronic components designed to perform one or more computing tasks. A node may include one or more processors (e.g., Intel Xeon™ or AMD Opteron™), memory, and interface circuitry, or any other additional devices requested by a client. A cluster may be defined as a group of two or more nodes that have the capability of exchanging data. A particular computing task may be performed upon one node, while other nodes in the cluster perform unrelated computing tasks. Alternatively, portions of a particular computing task may be distributed among the nodes to decrease the time required to perform the computing task as a whole. A processor is a device configured to perform an operation upon one or more operands to produce a result. The operations may be performed in response to instructions executed by the processor. Clustering software is often implemented on top of an operating system, and such clustering software controls operation of the nodes on the various assigned tasks in a particular manner (e.g., based on the configuration of the hardware and software). The use of the term "configuration" with regard to a cluster is intended to encompass not only the physical components selected for a cluster, their interconnections with each other in the cluster, and the topology of the cluster, but, at least in some cases, configuration also includes configuration of the software running on the computing resources of the cluster, which may include any clustering software utilized to manage the cluster.
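- Since "configuration" is defined above to span both the hardware topology and the software running on it, a simple illustrative record for capturing such a configuration might look like the following; the structure and field names are assumptions made for illustration, not a definition from the patent.

```python
# Hypothetical sketch: one way to record a cluster "configuration" covering both
# the physical components/topology and the software stack, as defined above.
from dataclasses import dataclass
from typing import List


@dataclass
class NodeSpec:
    processors: List[str]      # e.g., "Intel Xeon" or "AMD Opteron" parts
    memory_gb: int
    extra_devices: List[str]   # any additional devices requested by the client


@dataclass
class ClusterConfiguration:
    nodes: List[NodeSpec]
    interconnect: str          # topology/interconnect of the private cluster network
    operating_system: str
    clustering_software: str   # software that assigns and manages tasks on the nodes


example = ClusterConfiguration(
    nodes=[NodeSpec(processors=["Intel Xeon"], memory_gb=64, extra_devices=[]) for _ in range(8)],
    interconnect="low-latency switched fabric",
    operating_system="Linux",
    clustering_software="batch scheduler / cluster manager",
)
```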
- Nodes within a cluster may have one or more storage devices coupled to the nodes. A storage device is a persistent device capable of storing large amounts of data. For example, a storage device may be a magnetic storage device such as a disk device or an optical storage device such as a compact disc device. Nodes physically connected to a storage device may access the storage device directly. A storage device may be physically connected to one or more nodes of a cluster, but the storage device need not necessarily be physically connected to all the nodes of a cluster. In some clusters, a node not physically connected to a storage device may indirectly access the storage device via a data communication link connecting the nodes. Accordingly, a node may have access to one or more local, global, and/or shared storage devices within a cluster.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/894,664 US8352584B2 (en) | 2007-10-30 | 2010-09-30 | System for hosting customized computing clusters |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/927,921 US7822841B2 (en) | 2007-10-30 | 2007-10-30 | Method and system for hosting multiple, customized computing clusters |
US12/894,664 US8352584B2 (en) | 2007-10-30 | 2010-09-30 | System for hosting customized computing clusters |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/927,921 Continuation US7822841B2 (en) | 2007-10-30 | 2007-10-30 | Method and system for hosting multiple, customized computing clusters |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110023104A1 true US20110023104A1 (en) | 2011-01-27 |
US8352584B2 US8352584B2 (en) | 2013-01-08 |
Family
ID=40584334
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/927,921 Active 2028-10-28 US7822841B2 (en) | 2007-10-30 | 2007-10-30 | Method and system for hosting multiple, customized computing clusters |
US12/894,664 Active US8352584B2 (en) | 2007-10-30 | 2010-09-30 | System for hosting customized computing clusters |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/927,921 Active 2028-10-28 US7822841B2 (en) | 2007-10-30 | 2007-10-30 | Method and system for hosting multiple, customized computing clusters |
Country Status (2)
Country | Link |
---|---|
US (2) | US7822841B2 (en) |
WO (1) | WO2009058642A2 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7822841B2 (en) * | 2007-10-30 | 2010-10-26 | Modern Grids, Inc. | Method and system for hosting multiple, customized computing clusters |
US8108712B1 (en) * | 2009-10-30 | 2012-01-31 | Hewlett-Packard Development Company, L.P. | Method and apparatus for removing a computer from a computer cluster observing failure |
US8402139B2 (en) * | 2010-02-26 | 2013-03-19 | Red Hat, Inc. | Methods and systems for matching resource requests with cloud computing environments |
US8775626B2 (en) | 2010-09-17 | 2014-07-08 | Microsoft Corporation | Using templates to configure cloud resources |
US8832818B2 (en) | 2011-02-28 | 2014-09-09 | Rackspace Us, Inc. | Automated hybrid connections between multiple environments in a data center |
CN102169448B (en) * | 2011-03-18 | 2013-10-23 | 浪潮电子信息产业股份有限公司 | Deployment method of cluster parallel computing environment |
US8887263B2 (en) | 2011-09-08 | 2014-11-11 | Mcafee, Inc. | Authentication sharing in a firewall cluster |
US8763106B2 (en) | 2011-09-08 | 2014-06-24 | Mcafee, Inc. | Application state sharing in a firewall cluster |
US8725798B2 (en) | 2011-12-15 | 2014-05-13 | Microsoft Corporation | Provisioning high performance computing clusters |
US9106663B2 (en) * | 2012-02-01 | 2015-08-11 | Comcast Cable Communications, Llc | Latency-based routing and load balancing in a network |
US10452284B2 (en) * | 2013-02-05 | 2019-10-22 | International Business Machines Corporation | Storage system based host computer monitoring |
US9577892B2 (en) * | 2013-04-06 | 2017-02-21 | Citrix Systems, Inc. | Systems and methods for providing monitoring in a cluster system |
US9843624B1 (en) | 2013-06-13 | 2017-12-12 | Pouya Taaghol | Distributed software defined networking |
CN107547447B (en) * | 2017-08-31 | 2021-06-29 | 郑州云海信息技术有限公司 | Network communication method and device of distributed file system and network communication system |
US10911342B2 (en) | 2018-11-30 | 2021-02-02 | Sap Se | Distributed monitoring in clusters with self-healing |
US11249874B2 (en) * | 2019-03-20 | 2022-02-15 | Salesforce.Com, Inc. | Content-sensitive container scheduling on clusters |
US11956334B2 (en) | 2021-03-15 | 2024-04-09 | Hewlett Packard Enterprise Development Lp | Visualizing cluster node statuses |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2413029A1 (en) * | 2000-06-14 | 2001-12-27 | Coreexpress, Inc. | Internet route deaggregation and route selection preferencing |
- 2007-10-30 US US11/927,921 patent/US7822841B2/en active Active
- 2008-10-23 WO PCT/US2008/080876 patent/WO2009058642A2/en active Application Filing
- 2010-09-30 US US12/894,664 patent/US8352584B2/en active Active
Patent Citations (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4731860A (en) * | 1985-06-19 | 1988-03-15 | International Business Machines Corporation | Method for identifying three-dimensional objects using two-dimensional images |
US4837831A (en) * | 1986-10-15 | 1989-06-06 | Dragon Systems, Inc. | Method for creating and using multiple-word sound models in speech recognition |
US5079765A (en) * | 1989-01-09 | 1992-01-07 | Canon Kabushiki Kaisha | Network system having a gateway apparatus for momitoring a local area network |
US5185860A (en) * | 1990-05-03 | 1993-02-09 | Hewlett-Packard Company | Automatic discovery of network elements |
US5224205A (en) * | 1990-05-21 | 1993-06-29 | International Business Machines Corp. | Method of combining architecturally dissimilar computing networks into a single logical network |
US5371852A (en) * | 1992-10-14 | 1994-12-06 | International Business Machines Corporation | Method and apparatus for making a cluster of computers appear as a single host on a network |
US5774650A (en) * | 1993-09-03 | 1998-06-30 | International Business Machines Corporation | Control of access to a networked system |
US5649141A (en) * | 1994-06-30 | 1997-07-15 | Nec Corporation | Multiprocessor system for locally managing address translation table |
US5890007A (en) * | 1995-02-28 | 1999-03-30 | Nec Corporation | Multi-cluster parallel processing computer system |
US5694615A (en) * | 1995-06-26 | 1997-12-02 | Hewlett Packard Company | Storage system having storage units interconnected to form multiple loops to provide simultaneous access from multiple hosts |
US5822531A (en) * | 1996-07-22 | 1998-10-13 | International Business Machines Corporation | Method and system for dynamically reconfiguring a cluster of computer systems |
US5946463A (en) * | 1996-07-22 | 1999-08-31 | International Business Machines Corporation | Method and system for automatically performing an operation on multiple computer systems within a cluster |
US6088727A (en) * | 1996-10-28 | 2000-07-11 | Mitsubishi Denki Kabushiki Kaisha | Cluster controlling system operating on a plurality of computers in a cluster system |
US6363495B1 (en) * | 1999-01-19 | 2002-03-26 | International Business Machines Corporation | Method and apparatus for partition resolution in clustered computer systems |
US6438705B1 (en) * | 1999-01-29 | 2002-08-20 | International Business Machines Corporation | Method and apparatus for building and managing multi-clustered computer systems |
US20070156677A1 (en) * | 1999-07-21 | 2007-07-05 | Alberti Anemometer Llc | Database access system |
US6427209B1 (en) * | 1999-10-19 | 2002-07-30 | Microsoft Corporation | System and method of user logon in combination with user authentication for network access |
US6823452B1 (en) * | 1999-12-17 | 2004-11-23 | International Business Machines Corporation | Providing end-to-end user authentication for host access using digital certificates |
US6748429B1 (en) * | 2000-01-10 | 2004-06-08 | Sun Microsystems, Inc. | Method to dynamically change cluster or distributed system configuration |
US6779039B1 (en) * | 2000-03-31 | 2004-08-17 | Avaya Technology Corp. | System and method for routing message traffic using a cluster of routers sharing a single logical IP address distinct from unique IP addresses of the routers |
US6854069B2 (en) * | 2000-05-02 | 2005-02-08 | Sun Microsystems Inc. | Method and system for achieving high availability in a networked computer system |
US7185076B1 (en) * | 2000-05-31 | 2007-02-27 | International Business Machines Corporation | Method, system and program products for managing a clustered computing environment |
US6990602B1 (en) * | 2001-08-23 | 2006-01-24 | Unisys Corporation | Method for diagnosing hardware configuration in a clustered system |
US6826568B2 (en) * | 2001-12-20 | 2004-11-30 | Microsoft Corporation | Methods and system for model matching |
US7243368B2 (en) * | 2002-03-29 | 2007-07-10 | Hewlett-Packard Development Company, L.P. | Access control system and method for a networked computer system |
US7035858B2 (en) * | 2002-04-29 | 2006-04-25 | Sun Microsystems, Inc. | System and method dynamic cluster membership in a distributed data system |
US7269762B2 (en) * | 2002-05-29 | 2007-09-11 | Robert Bosch Gmbh | Method for mutual monitoring of components of a distributed computer system |
US7188171B2 (en) * | 2003-01-23 | 2007-03-06 | Hewlett-Packard Development Company, L.P. | Method and apparatus for software and hardware event monitoring and repair |
US20050228906A1 (en) * | 2003-05-14 | 2005-10-13 | Fujitsu Limited | Delay storage device and delay treating method |
US20050108518A1 (en) * | 2003-06-10 | 2005-05-19 | Pandya Ashish A. | Runtime adaptable security processor |
US20060212234A1 (en) * | 2003-07-16 | 2006-09-21 | Wendelin Egli | Modular data recording and display unit |
US20050039180A1 (en) * | 2003-08-11 | 2005-02-17 | Scalemp Inc. | Cluster-based operating system-agnostic virtual computing system |
US20050060391A1 (en) * | 2003-09-16 | 2005-03-17 | International Business Machines Corporation | Autonomic cluster-based optimization |
US20070067435A1 (en) * | 2003-10-08 | 2007-03-22 | Landis John A | Virtual data center that allocates and manages system resources across multiple nodes |
US20070061441A1 (en) * | 2003-10-08 | 2007-03-15 | Landis John A | Para-virtualized computer system with I/0 server partitions that map physical host hardware for access by guest partitions |
US20070028244A1 (en) * | 2003-10-08 | 2007-02-01 | Landis John A | Computer system para-virtualization using a hypervisor that is implemented in a partition of the host system |
US20050108425A1 (en) * | 2003-11-14 | 2005-05-19 | Alcatel | Software configurable cluster-based router using heterogeneous nodes as cluster nodes |
US20060143350A1 (en) * | 2003-12-30 | 2006-06-29 | 3Tera, Inc. | Apparatus, method and system for aggregrating computing resources |
US20070245167A1 (en) * | 2004-01-20 | 2007-10-18 | International Business Machines Corporation | Managing failover of j2ee compliant middleware in a high availability system |
US7246256B2 (en) * | 2004-01-20 | 2007-07-17 | International Business Machines Corporation | Managing failover of J2EE compliant middleware in a high availability system |
US7634683B2 (en) * | 2004-01-20 | 2009-12-15 | International Business Machines Corporation | Managing failover of J2EE compliant middleware in a high availability system |
US6996502B2 (en) * | 2004-01-20 | 2006-02-07 | International Business Machines Corporation | Remote enterprise management of high availability systems |
US20050172161A1 (en) * | 2004-01-20 | 2005-08-04 | International Business Machines Corporation | Managing failover of J2EE compliant middleware in a high availability system |
US20050159927A1 (en) * | 2004-01-20 | 2005-07-21 | International Business Machines Corporation | Remote enterprise management of high availability systems |
US20070220152A1 (en) * | 2004-03-13 | 2007-09-20 | Jackson David B | System and method for providing advanced reservations in a compute environment |
US20100023949A1 (en) * | 2004-03-13 | 2010-01-28 | Cluster Resources, Inc. | System and method for providing advanced reservations in a compute environment |
US20050251567A1 (en) * | 2004-04-15 | 2005-11-10 | Raytheon Company | System and method for cluster management based on HPC architecture |
US20050235055A1 (en) * | 2004-04-15 | 2005-10-20 | Raytheon Company | Graphical user interface for managing HPC clusters |
US7203864B2 (en) * | 2004-06-25 | 2007-04-10 | Hewlett-Packard Development Company, L.P. | Method and system for clustering computers into peer groups and comparing individual computers to their peers |
US20060080323A1 (en) * | 2004-09-30 | 2006-04-13 | Wong Ryan H Y | Apparatus and method for report publication in a federated cluster |
US20060085785A1 (en) * | 2004-10-15 | 2006-04-20 | Emc Corporation | Method and apparatus for configuring, monitoring and/or managing resource groups including a virtual machine |
US20060190602A1 (en) * | 2005-02-23 | 2006-08-24 | At&T Corp. | Monitoring for replica placement and request distribution |
US20080216081A1 (en) * | 2005-03-11 | 2008-09-04 | Cluster Resources, Inc. | System and Method For Enforcing Future Policies in a Compute Environment |
US20060212332A1 (en) * | 2005-03-16 | 2006-09-21 | Cluster Resources, Inc. | Simple integration of on-demand compute environment |
US20060230149A1 (en) * | 2005-04-07 | 2006-10-12 | Cluster Resources, Inc. | On-Demand Access to Compute Resources |
US20060248371A1 (en) * | 2005-04-28 | 2006-11-02 | International Business Machines Corporation | Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center |
US20070067481A1 (en) * | 2005-08-23 | 2007-03-22 | Viswa Sharma | Omni-protocol engine for reconfigurable bit-stream processing in high-speed networks |
US20070156813A1 (en) * | 2005-11-15 | 2007-07-05 | California Institute Of Technology | Method and apparatus for collaborative system |
US20080177690A1 (en) * | 2006-01-19 | 2008-07-24 | Mhave, Llc. | Rules Engine for Enterprise System |
US20080043769A1 (en) * | 2006-08-16 | 2008-02-21 | Tyan Computer Corporation | Clustering system and system management architecture thereof |
US20080092058A1 (en) * | 2006-08-18 | 2008-04-17 | Akamai Technologies, Inc. | Method of data collection among participating content providers in a distributed network |
US20080086523A1 (en) * | 2006-08-18 | 2008-04-10 | Akamai Technologies, Inc. | Method of data collection in a distributed network |
US20080086524A1 (en) * | 2006-08-18 | 2008-04-10 | Akamai Technologies, Inc. | Method and system for identifying valid users operating across a distributed network |
US20080070550A1 (en) * | 2006-09-20 | 2008-03-20 | Hose David A | Providing Subscriber Specific Information Across Wireless Networks |
US20080120403A1 (en) * | 2006-11-22 | 2008-05-22 | Dell Products L.P. | Systems and Methods for Provisioning Homogeneous Servers |
US20090019535A1 (en) * | 2007-07-10 | 2009-01-15 | Ragingwire Enterprise Solutions, Inc. | Method and remote system for creating a customized server infrastructure in real time |
US7822841B2 (en) * | 2007-10-30 | 2010-10-26 | Modern Grids, Inc. | Method and system for hosting multiple, customized computing clusters |
Cited By (77)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11960937B2 (en) | 2004-03-13 | 2024-04-16 | Iii Holdings 12, Llc | System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter |
US12124878B2 (en) | 2004-03-13 | 2024-10-22 | Iii Holdings 12, Llc | System and method for scheduling resources within a compute environment using a scheduler process with reservation mask function |
US11467883B2 (en) | 2004-03-13 | 2022-10-11 | Iii Holdings 12, Llc | Co-allocating a reservation spanning different compute resources types |
US11652706B2 (en) | 2004-06-18 | 2023-05-16 | Iii Holdings 12, Llc | System and method for providing dynamic provisioning within a compute environment |
US12009996B2 (en) | 2004-06-18 | 2024-06-11 | Iii Holdings 12, Llc | System and method for providing dynamic provisioning within a compute environment |
US11630704B2 (en) | 2004-08-20 | 2023-04-18 | Iii Holdings 12, Llc | System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information |
US11709709B2 (en) | 2004-11-08 | 2023-07-25 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US12039370B2 (en) | 2004-11-08 | 2024-07-16 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11861404B2 (en) | 2004-11-08 | 2024-01-02 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11494235B2 (en) | 2004-11-08 | 2022-11-08 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11537434B2 (en) | 2004-11-08 | 2022-12-27 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11762694B2 (en) | 2004-11-08 | 2023-09-19 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US12008405B2 (en) | 2004-11-08 | 2024-06-11 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11537435B2 (en) | 2004-11-08 | 2022-12-27 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11656907B2 (en) | 2004-11-08 | 2023-05-23 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11886915B2 (en) | 2004-11-08 | 2024-01-30 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US12120040B2 (en) | 2005-03-16 | 2024-10-15 | Iii Holdings 12, Llc | On-demand compute environment |
US11658916B2 (en) | 2005-03-16 | 2023-05-23 | Iii Holdings 12, Llc | Simple integration of an on-demand compute environment |
US11765101B2 (en) | 2005-04-07 | 2023-09-19 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11533274B2 (en) | 2005-04-07 | 2022-12-20 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11522811B2 (en) | 2005-04-07 | 2022-12-06 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11496415B2 (en) | 2005-04-07 | 2022-11-08 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11831564B2 (en) | 2005-04-07 | 2023-11-28 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11650857B2 (en) | 2006-03-16 | 2023-05-16 | Iii Holdings 12, Llc | System and method for managing a hybrid computer environment |
US11522952B2 (en) | 2007-09-24 | 2022-12-06 | The Research Foundation For The State University Of New York | Automatic clustering for self-organizing grids |
US20100281095A1 (en) * | 2009-04-21 | 2010-11-04 | Wehner Camille B | Mobile grid computing |
US9465771B2 (en) | 2009-09-24 | 2016-10-11 | Iii Holdings 2, Llc | Server on a chip and node cards comprising one or more of same |
US9454403B2 (en) | 2009-10-30 | 2016-09-27 | Iii Holdings 2, Llc | System and method for high-performance, low-power data center interconnect fabric |
US9077654B2 (en) | 2009-10-30 | 2015-07-07 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging managed server SOCs |
US9876735B2 (en) | 2009-10-30 | 2018-01-23 | Iii Holdings 2, Llc | Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect |
US9929976B2 (en) | 2009-10-30 | 2018-03-27 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging managed server SOCs |
US9866477B2 (en) | 2009-10-30 | 2018-01-09 | Iii Holdings 2, Llc | System and method for high-performance, low-power data center interconnect fabric |
US9977763B2 (en) | 2009-10-30 | 2018-05-22 | Iii Holdings 2, Llc | Network proxy for high-performance, low-power data center interconnect fabric |
US11526304B2 (en) | 2009-10-30 | 2022-12-13 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US10050970B2 (en) | 2009-10-30 | 2018-08-14 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging server SOCs or server fabrics |
US10135731B2 (en) | 2009-10-30 | 2018-11-20 | Iii Holdings 2, Llc | Remote memory access functionality in a cluster of data processing nodes |
US10140245B2 (en) | 2009-10-30 | 2018-11-27 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9479463B2 (en) | 2009-10-30 | 2016-10-25 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging managed server SOCs |
US9008079B2 (en) | 2009-10-30 | 2015-04-14 | Iii Holdings 2, Llc | System and method for high-performance, low-power data center interconnect fabric |
US9054990B2 (en) | 2009-10-30 | 2015-06-09 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging server SOCs or server fabrics |
US9509552B2 (en) | 2009-10-30 | 2016-11-29 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging server SOCs or server fabrics |
US10877695B2 (en) | 2009-10-30 | 2020-12-29 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9075655B2 (en) | 2009-10-30 | 2015-07-07 | Iii Holdings 2, Llc | System and method for high-performance, low-power data center interconnect fabric with broadcast or multicast addressing |
US11720290B2 (en) | 2009-10-30 | 2023-08-08 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9405584B2 (en) | 2009-10-30 | 2016-08-02 | Iii Holdings 2, Llc | System and method for high-performance, low-power data center interconnect fabric with addressing and unicast routing |
US9262225B2 (en) | 2009-10-30 | 2016-02-16 | Iii Holdings 2, Llc | Remote memory access functionality in a cluster of data processing nodes |
US9311269B2 (en) | 2009-10-30 | 2016-04-12 | Iii Holdings 2, Llc | Network proxy for high-performance, low-power data center interconnect fabric |
US9749326B2 (en) | 2009-10-30 | 2017-08-29 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging server SOCs or server fabrics |
US9680770B2 (en) | 2009-10-30 | 2017-06-13 | Iii Holdings 2, Llc | System and method for using a multi-protocol fabric module across a distributed server interconnect fabric |
AU2012273370B2 (en) * | 2011-06-21 | 2015-12-24 | Intel Corporation | Native cloud computing via network segmentation |
US8725875B2 (en) | 2011-06-21 | 2014-05-13 | Intel Corporation | Native cloud computing via network segmentation |
CN103620578A (en) * | 2011-06-21 | 2014-03-05 | 英特尔公司 | Native cloud computing via network segmentation |
WO2012177359A3 (en) * | 2011-06-21 | 2013-02-28 | Intel Corporation | Native cloud computing via network segmentation |
US9585281B2 (en) | 2011-10-28 | 2017-02-28 | Iii Holdings 2, Llc | System and method for flexible storage and networking provisioning in large scalable processor installations |
US10021806B2 (en) | 2011-10-28 | 2018-07-10 | Iii Holdings 2, Llc | System and method for flexible storage and networking provisioning in large scalable processor installations |
US9069929B2 (en) | 2011-10-31 | 2015-06-30 | Iii Holdings 2, Llc | Arbitrating usage of serial port in node card of scalable and modular servers |
US9965442B2 (en) | 2011-10-31 | 2018-05-08 | Iii Holdings 2, Llc | Node card management in a modular and large scalable server system |
US9792249B2 (en) | 2011-10-31 | 2017-10-17 | Iii Holdings 2, Llc | Node card utilizing a same connector to communicate pluralities of signals |
US9092594B2 (en) | 2011-10-31 | 2015-07-28 | Iii Holdings 2, Llc | Node card management in a modular and large scalable server system |
US20130152191A1 (en) * | 2011-12-13 | 2013-06-13 | David Andrew Bright | Timing management in a large firewall cluster |
US8955097B2 (en) * | 2011-12-13 | 2015-02-10 | Mcafee, Inc. | Timing management in a large firewall cluster |
US10721209B2 (en) | 2011-12-13 | 2020-07-21 | Mcafee, Llc | Timing management in a large firewall cluster |
US9648102B1 (en) | 2012-12-27 | 2017-05-09 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9584544B2 (en) * | 2013-03-12 | 2017-02-28 | Red Hat Israel, Ltd. | Secured logical component for security in a virtual environment |
US20140282813A1 (en) * | 2013-03-12 | 2014-09-18 | Red Hat Israel, Ltd. | Secured logical component for security in a virtual environment |
US10659523B1 (en) | 2014-05-23 | 2020-05-19 | Amazon Technologies, Inc. | Isolating compute clusters created for a customer |
US11436268B2 (en) | 2014-09-30 | 2022-09-06 | Splunk Inc. | Multi-site cluster-based data intake and query systems |
US11789961B2 (en) | 2014-09-30 | 2023-10-17 | Splunk Inc. | Interaction with particular event for field selection |
US11768848B1 (en) | 2014-09-30 | 2023-09-26 | Splunk Inc. | Retrieving, modifying, and depositing shared search configuration into a shared data store |
US11748394B1 (en) | 2014-09-30 | 2023-09-05 | Splunk Inc. | Using indexers from multiple systems |
US11386109B2 (en) * | 2014-09-30 | 2022-07-12 | Splunk Inc. | Sharing configuration information through a shared storage location |
US20190188208A1 (en) * | 2014-09-30 | 2019-06-20 | Splunk Inc. | Sharing configuration information through a shared storage location |
US11106388B2 (en) | 2015-02-03 | 2021-08-31 | Netapp, Inc. | Monitoring storage cluster elements |
US10437510B2 (en) * | 2015-02-03 | 2019-10-08 | Netapp Inc. | Monitoring storage cluster elements |
US20160224277A1 (en) * | 2015-02-03 | 2016-08-04 | Netapp, Inc. | Monitoring storage cluster elements |
US20220224602A1 (en) * | 2019-04-25 | 2022-07-14 | Juniper Networks, Inc. | Multi-cluster configuration controller for software defined networks |
US11646941B2 (en) * | 2019-04-25 | 2023-05-09 | Juniper Networks, Inc. | Multi-cluster configuration controller for software defined networks |
Also Published As
Publication number | Publication date |
---|---|
US8352584B2 (en) | 2013-01-08 |
WO2009058642A3 (en) | 2009-07-09 |
US20090113051A1 (en) | 2009-04-30 |
WO2009058642A2 (en) | 2009-05-07 |
US7822841B2 (en) | 2010-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7822841B2 (en) | Method and system for hosting multiple, customized computing clusters | |
AU2016386887B2 (en) | Distributed edge processing of internet of things device data in co-location facilities | |
US10547693B2 (en) | Security device capability discovery and device selection | |
US7269641B2 (en) | Remote reconfiguration system | |
Legrand et al. | MonALISA: An agent based, dynamic service system to monitor, control and optimize distributed systems | |
CN100461150C (en) | Performing message and transformation adapter functions in a network element on behalf of an application | |
US7962635B2 (en) | Systems and methods for single session management in load balanced application server clusters | |
EP1952259B1 (en) | Non-centralized network device management using console communications system and method | |
US20060282886A1 (en) | Service oriented security device management network | |
US20110055899A1 (en) | Secure remote management of network devices with local processing and secure shell for remote distribution of information | |
CN101523808A (en) | Network service usage management systems and methods | |
WO2002093837A2 (en) | Broadband network service delivery method and device | |
US20150312364A1 (en) | Intelligent Global Services Bus and System for Mobile Applications | |
US7783786B1 (en) | Replicated service architecture | |
TW201243617A (en) | Cloud computing-based service management system | |
JP2013533555A (en) | FIX proxy server | |
KR20010074733A (en) | A method and apparatus for implementing a workgroup server array | |
Legrand et al. | Monitoring and control of large systems with MonALISA | |
US20030149740A1 (en) | Remote services delivery architecture | |
US20150066599A1 (en) | Method and apparatus for periodic diagnostics of tenant event streams | |
WO2013059661A1 (en) | Intelligent global services bus and system for mobile applications | |
Mahakalkar | Survey of Big Data Management on a Distributed Cloud | |
TW201617866A (en) | Method for providing fault-tolerant software as service platform | |
JP2000059362A (en) | Network fault management system | |
JP2002163169A (en) | Load dispersion controller, load dispersion type server system, and server load dispersion method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MODERN GRIDS, INC., COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FRANKLIN, JEFFREY B.;REEL/FRAME:025076/0497 Effective date: 20071029 |
|
AS | Assignment |
Owner name: LIGHT REFRACTURE LTD., LLC, DELAWARE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MODERN GRIDS INC.;REEL/FRAME:027628/0417 Effective date: 20120110 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
AS | Assignment |
Owner name: CHEMTRON RESEARCH LLC, DELAWARE Free format text: MERGER;ASSIGNOR:LIGHT REFRACTURE LTD., LLC;REEL/FRAME:037404/0053 Effective date: 20150826 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: INTELLECTUAL VENTURES II LLC, DELAWARE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHEMTRON RESEARCH LLC;REEL/FRAME:052088/0054 Effective date: 20200311 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |