US20200241916A1 - Legacy application migration to real time, parallel performance cloud - Google Patents
Legacy application migration to real time, parallel performance cloud
- Publication number
- US20200241916A1 (U.S. application Ser. No. 16/847,285)
- Authority
- US
- United States
- Prior art keywords
- software application
- application
- legacy
- legacy software
- neuron
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/541—Interprogram communication via adapters, e.g. between incompatible applications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/067—Enterprise or organisation modelling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Strategic Management (AREA)
- Human Resources & Organizations (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- General Engineering & Computer Science (AREA)
- Operations Research (AREA)
- Game Theory and Decision Science (AREA)
- Development Economics (AREA)
- General Business, Economics & Management (AREA)
- Tourism & Hospitality (AREA)
- Quality & Reliability (AREA)
- Marketing (AREA)
- Educational Administration (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Multi Processors (AREA)
- Stored Programmes (AREA)
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 13/442,353 entitled "Legacy Application Migration To Real Time, Parallel Performance Cloud", which was filed on Apr. 9, 2012, and which claims the benefit of U.S. Provisional Patent Application No. 61/472,812, which was filed Apr. 7, 2011 and which in turn claims the benefit of U.S. patent application Ser. No. 12/870,348 filed on Aug. 27, 2010 and entitled "System and Method For Employing The Use Of Neural Networks For The Purpose Of Real-Time Business Intelligence And Automation Control", all of which are incorporated herein by reference.
- The present invention relates to cloud computing, parallel processing, distributed computing, fault tolerance, and load balancing and more particularly, relates to a system and method for enabling legacy software applications to be incorporated into a massively parallel and distributed processing model in the Cloud.
- Cloud computing is considered one of the top transformational IT changes and trends in 2010 and onwards. Cloud computing provides a scalable, dynamic, and distributed infrastructure for enabling applications to dynamically obtain and utilize computing resources on demand.
- Most businesses have existing computer software applications that are not engineered or architected for cloud computing. Many existing applications cannot perform distributed and parallel processing and are unable to be deployed on an elastic cloud computing environment without significant changes to the existing applications' source code and application architecture. The challenge is applying a cost effective and simple migration and transformation process for legacy applications and products so that they can be incorporated into the cloud computing environment.
- A legacy system or application program is a previously deployed third party or internally developed customer application that continues to be used, typically because it still functions for the users' needs or is too expensive to replace, even though newer technology or more efficient methods of performing a task are now available. Legacy applications include programs having JAVA® command lines, client and server applications based on JAVA thin client, MICROSOFT® thin client and server-based applications, Client/Server applications, Client workstation applications, third party independent software vendor applications, and proprietary client applications running on proprietary architectures and operating systems. Legacy application programs are typically serially programmed, linear in performance, residing in only one data center and with a limited number of users constrained by geographic or business location.
- To implement an existing legacy application in the cloud computing environment and enable the application to be distributed, parallel, and demand-elastic is a very expensive and time-consuming activity. The existing application architecture and source code need to be re-factored and significantly rewritten, tested extensively, and re-integrated with its existing applications.
- The cost and time to implement such a legacy application re-write can exceed the original cost and development time. Given these impediments, businesses are unable to adapt their existing applications to the cloud computing environment. The new Cloud paradigm offers huge business and technical value, including parallel processing, distributed use-anywhere operation, high performance, high resiliency, high accessibility and high availability.
- This significant divergence in technology deployment and accessibility creates a significant challenge: the cost effective and manageable ability to migrate applications, best processes and institutionalized intellectual property from serial, linear, data center or application confined applications to a new technology paradigm of agile, competitive, massively parallel, use anywhere and broadly distributed/accessible functions. It is this challenge that the invention specifically addresses.
- Accordingly, the objective of the present invention is to provide a non-intrusive assimilation model which enables these legacy applications to be transformed from standalone, serial-based computing to highly distributed, parallel processing and cooperating cloud application services. The innovation combines the cloud-centric, distributed, and parallel processing neuron platform infrastructure with a suite of legacy wrapper libraries to assimilate or "wrap" and encapsulate legacy applications and enable them to operate within the neuron cloud computing platform.
- The present invention incorporates technology and process innovation, using the assignee's distributed neuron platform computing design and high performance, highly resilient virtual server, to migrate legacy applications to the cloud in a manner that is simple, quick and a fraction of the cost of traditional migration and porting solutions. This design is referred to herein as the neuron platform, the neuron platform and technology, or the neuron technology.
- A system for operating a legacy software application is presented. The system includes a distributed processing service. A wrapper software object is configured both to receive processing requests to a legacy software application from outside the distributed processing service and to send the processing requests using the distributed processing service. Additionally, an encapsulated software object includes the legacy software application and an exoskeleton connection service. The exoskeleton connection service is both configured to accept processing requests from the distributed processing service, and mapped to an application programming interface of the legacy software application.
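- As a rough illustration only, the relationship between these components might be sketched in JAVA as below. Every type and method name here (DistributedProcessingService, WrapperObject, ExoskeletonService, LegacyApplication) is a hypothetical stand-in, since the patent describes the roles but not their code.

```java
// Hypothetical sketch; all names are illustrative, not from the patent.
interface DistributedProcessingService {
    void send(String destination, byte[] payload); // durable, distributed transport
}

interface LegacyApplication {
    void invokeApi(byte[] request);                // the legacy application's own API
}

// Wrapper object: receives external requests and forwards them over the service.
class WrapperObject {
    private final DistributedProcessingService service;

    WrapperObject(DistributedProcessingService service) {
        this.service = service;
    }

    // Map an inbound request onto the distributed processing service.
    void onExternalRequest(byte[] legacyApiRequest) {
        service.send("dispatcher", legacyApiRequest);
    }
}

// Exoskeleton connection service: accepts requests from the distributed
// processing service and re-maps them to the legacy application's API.
class ExoskeletonService {
    private final LegacyApplication legacy;

    ExoskeletonService(LegacyApplication legacy) {
        this.legacy = legacy;
    }

    void onServiceRequest(byte[] payload) {
        legacy.invokeApi(payload);
    }
}
```

- The point of the split is that the wrapper speaks the distributed service's protocol on the outside, while the exoskeleton speaks the legacy application's own API on the inside.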
- Additionally, there can be at least two of the encapsulated software objects in the system for operating a legacy software application.
- The system for operating a legacy software application can also include a load and queue manager configured to select an encapsulated software object from the at least two encapsulated software objects, as well as a dispatcher software object configured to receive the request sent by the wrapper software object and send the request to the selected encapsulated software object.
- The load and queue manager can include a load balance software object configured to monitor the at least two encapsulated software objects, and a queue management software object configured to monitor a queue of units of work sent to one of the at least two encapsulated software objects.
- The load and queue manager can be configured to select the selected encapsulated software object in a round robin of the at least two encapsulated software objects. The load and queue manager can be configured to select the selected encapsulated software object based on a threshold.
- The load and queue manager can be configured to cause an additional encapsulated software object to be created if the load and queue manager determines that each of the at least two encapsulated software objects are unavailable. The system for operating a legacy software application can have a database configured to store a partial result of a previous execution of the encapsulated software object, and the load and queue manager can be configured to use the partial result as a parameter of the additional encapsulated software object.
- The dispatcher object can be configured to increment an in-flight count each time it sends a request to the selected encapsulated software object, and the system for operating a legacy software application can also include a complete message object that can be configured to decrement the in-flight count upon receiving an indication that the selected encapsulated software object has completed processing the request.
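- A minimal sketch of that in-flight accounting, using a JAVA atomic counter (the class and method names are assumptions for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the in-flight count described above.
class InFlightTracker {
    private final AtomicInteger inFlight = new AtomicInteger(0);

    // Dispatcher side: called each time a request is sent to a worker.
    void onDispatch() {
        inFlight.incrementAndGet();
    }

    // Complete-message side: called when a worker reports completion.
    void onComplete() {
        inFlight.decrementAndGet();
    }

    int inFlightCount() {
        return inFlight.get();
    }
}
```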
- A method for operating a legacy software application is also presented. A wrapper object in a cloud computing environment that includes a distributed processing service maps a processing request written in an application programming interface of a legacy software application to the protocols of the distributed processing service. The wrapper object sends the mapped processing request to a dispatcher object through the distributed processing service. The dispatcher object sends the mapped processing request both to a load balance object and to a queue management object through the distributed processing service. Either the load balance object or the queue management object deploys an instance of an encapsulated software object into the cloud computing environment based on a load evaluation. The dispatcher sends the mapped processing request to the instance of the encapsulated software object. The instance of the encapsulated software object re-maps the mapped request to the application programming interface of the legacy software application. An instance of the legacy software application within the encapsulated software object executes the re-mapped request. The mapped processing request can be modified by an additional software object.
- Another system for operating a legacy software application is presented. The system includes a distributed processing service. The system also includes a wrapper software object that is configured both to receive processing requests to a legacy software application from outside the distributed processing service and to send the processing requests using the distributed processing service. Additionally, there are at least two encapsulated software objects, each of which includes the legacy software application and an exoskeleton connection service. The exoskeleton connection service is both configured to accept processing requests from the distributed processing service, and mapped to an application programming interface of the legacy software application. There is also a master dispatcher object configured to receive the request sent by the wrapper software object and send the request to one of the at least two encapsulated software objects. Further, the system includes a slave dispatcher object configured to perform the functions of the master dispatcher object in the event communication between the master dispatcher object and the slave dispatcher object is disrupted. The master dispatcher object and the slave dispatcher object can be configured to operate on different servers of the cloud computing environment.
- These and other features and advantages of the present invention will be better understood by reading the following detailed description, taken together with the drawings wherein:
- FIG. 1 is a schematic diagram of a neuron cloud environment in which a neuron wrapped legacy application interacts in accordance with the present invention;
- FIG. 2 is a flowchart illustrating the legacy application wrapper invocation model in accordance with the teachings of the present invention;
- FIG. 3 is a diagram and flowchart illustrating the runtime processing model for a legacy application wrapped with a neuron operable on the neuron based cloud system in accordance with the teachings of the present invention; and
- FIG. 4 is a schematic diagram and flowchart of a cloud integration model showing the interaction between master neurons, slave neurons and worker neurons in accordance with the teachings of the present invention.
- The attached drawings and description (beginning with FIG. 1) highlight the neuron server ("Cortex") environment 10 in accordance with the present invention and its ability to quickly, cost effectively and seamlessly "wrap" existing products 12 with a high performance API 14 and connect seamlessly to the neuron Cloud Virtual Server (Cloud Cortex) 16. The neuron server 16 has an interface to commercial cloud management systems 18. Different cloud management systems and environments can be supported by extending the cloud management adapter library without affecting the overall neuron platform. In this manner, cloud services can be monitored and adjusted 27 while also enabling cloud services to be configured 25.
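- The patent does not spell out the adapter API, but one plausible shape for such a seam is sketched below; the interface name, methods and status values are assumptions for illustration.

```java
// Hypothetical adapter seam: one implementation per cloud management system,
// so a new environment can be added without changing the neuron platform.
interface CloudManagementAdapter {
    enum NodeStatus { STARTING, RUNNING, OVERLOADED, FAILED }

    String provisionNode(String applicationImageId); // create a node, return its id
    void destroyNode(String nodeId);                 // remove a node
    NodeStatus monitorNode(String nodeId);           // status used to adjust services
}
```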
- The external interface (wrapper API 14) enables an existing software product 12 to fully leverage massively parallel, highly distributed, horizontally and vertically resilient and available scale, and so to increase performance through neuron parallel processing, to automatically clone multiple instances 20 of the same application so that volume and load are managed in parallel (rather than serially, as in current solutions), and to enable access and instantiation of the application anywhere within the cloud while maintaining automatic demand elasticity and fault tolerance 22.
- The present invention features an innovative process and technology within its neuron platform service that enables legacy/third party non-cloud designed software applications to be incorporated into a massively parallel and distributed processing model in the Cloud. The process is made up of a number of simple steps using the neuron technology wherein:
- A. Existing legacy applications are "wrapped" with a neuron service Interface or "Wrapper"/"Exoskeleton". These "wrappers" are incorporated into a neuron wrapper library, are preconfigured, and enable "off the shelf" interface instantiations specifically mapped to the target application's API. The "wrapper" is best considered an exoskeleton connection service which includes an encapsulation of an API, all base neuron functions and characteristics, security logic specific to the technology, and services enabling the connection to the hyper fast, resilient, clustered neuron "cortex" Platform residing in the cloud. Wrappers are off the shelf and aligned to the specific type of technology (a registry sketch follows this list), including:
- JAVA based command-level programs;
- JAVA based Thin client server based applications;
- MICROSOFT WINDOWS® Thin client server based applications;
- Fat Client applications;
- Third Party applications; and
- Mainframe applications.
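- The registry sketch below illustrates how an off-the-shelf wrapper library keyed by these technology types might be organized. It reuses the hypothetical ExoskeletonService and LegacyApplication types from the earlier sketch; all names remain assumptions rather than the patent's implementation.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical wrapper library: preconfigured wrapper factories keyed by the
// technology type of the target legacy application.
class WrapperLibrary {
    enum TechnologyType {
        JAVA_COMMAND_LEVEL, JAVA_THIN_CLIENT, WINDOWS_THIN_CLIENT,
        FAT_CLIENT, THIRD_PARTY, MAINFRAME
    }

    interface WrapperFactory {
        ExoskeletonService wrap(LegacyApplication app); // types from earlier sketch
    }

    private final Map<TechnologyType, WrapperFactory> factories =
            new EnumMap<>(TechnologyType.class);

    void register(TechnologyType type, WrapperFactory factory) {
        factories.put(type, factory);
    }

    // Instantiate the preconfigured wrapper mapped to the target application.
    ExoskeletonService wrap(TechnologyType type, LegacyApplication app) {
        return factories.get(type).wrap(app);
    }
}
```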
- Each “wrapper”, once deployed, allows applications to be cloud enabled including parallel processing, on-demand vertical and horizontal scale, auto-cloning to enable load sharing and deep resiliency, high availability and clustering in either private or public cloud.
- The neuron solution is agnostic to the interface type or application requirements: a database, a service, an abstract or table interface, an HTTP sniffer, or any number of other methods can be used to ensure maximum performance and connectivity.
- These wrappers, in simple terms, enable different applications to interact transparently with the neuron platform.
- An example of a wrapper invocation model is provided as 30, FIG. 2. The execution plan concept is as follows:
invoke(method1, paramlist)
invoke(method2, paramlist)
...
invoke(method3, paramlist)
- If required, the present invention can store partial results from previous executions and use those as parameters, depending on the complexity of the operation at hand. The invoke method shown in FIG. 3 is based on the reflection process. The client can provide JAVA archive (JAR) files, provide from the beginning a collection of JAVA source files that will be incorporated in the project from an early stage, or simply have them added at runtime as a JAR type file import. In this way, the client can use different calls from different class instances (more precisely, object instances of those classes) and organize those in an instruction code block that allows the client to organize the logic and create the proper integration of the code.
- A sample call (concept) is shown below:
Aux inst1val=objConf1instance.invoke("methodconfigname1", paraminputList)
Aux inst2val=objConf1instance.invoke("methodconfigname1", paraminputList.Inst1val)
....
Rez instWval=objconfxinstance.invoke("methodw", parameters)
....
Rez instNval=objconfxinstance.invoke("methodn", parameters)
Result=OriginalXmlRequest + InstWval + InstNval (tags)
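- A sketch of how such a reflection-based execution plan might be realized in JAVA follows. The class is illustrative only; the partial-result handling is an assumption modeled on the description above, not disclosed code.

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Illustrative reflection-based execution plan with partial-result reuse.
class ExecutionPlan {
    private final Map<String, Object> partialResults = new HashMap<>();

    // Invoke one configured method on a target object instance by name.
    Object invokeStep(Object target, String methodName, Object... args)
            throws Exception {
        Class<?>[] types = new Class<?>[args.length];
        for (int i = 0; i < args.length; i++) {
            types[i] = args[i].getClass();
        }
        Method method = target.getClass().getMethod(methodName, types);
        Object result = method.invoke(target, args);
        partialResults.put(methodName, result); // keep for later steps
        return result;
    }

    // A later step can pick up a stored partial result as a parameter.
    Object partialResult(String methodName) {
        return partialResults.get(methodName);
    }
}
```

- A later step such as invokeStep(obj, "methodw", plan.partialResult("methodconfigname1")) would mirror the paraminputList.Inst1val pattern in the sample call above.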
- Once the applications are "wrapped", the applications are represented as neurons 15, FIG. 1, within the neuron Platform. The applications can then be incorporated into neuron networks and linked to the different neuron applications, including the Design Studio 25, Enterprise Control Manager 27, and the Admin application 18. The wrapped legacy neurons 15 now have configuration plans established. During the runtime processing, the neuron "cortex" platform orchestrates the remote instances of the configured neuron networks and enables the target legacy application to initiate multi-threaded requests, transmit, and fully utilize a high availability, elastic and resilient architecture.
- This "cortex" platform is a high performance, cloud enabled virtual server that enables the wrapped legacy application 15 to dynamically create and remove new cloud nodes and automatically deploy new application images with remote instances of the legacy application operating within the neuron platform. Specific functions applied include: auto load share; distribution and accessibility across multiple data centers to utilize capacity anywhere within the cloud; auto-cloning as and when necessary for automated capacity and performance balancing; vertical and horizontal scaling without manual initiation; auto-restart in a different location if the application encounters critical errors or fails to operate; and seamless integration with other applications within the cloud, as if the application had been fully and completely reconfigured for distributed, virtualized cloud operation.
- The API Wrapper neuron 14 provides a common interface and facility for receiving and processing external application requests. The API neuron marshals these requests to the Dispatcher neuron 24 for evaluation and processing.
- The Dispatcher neuron 24 transmits the durable messaging requests to the Intelligent Load Balance 28 and Queue Management 26 neurons, which monitor the neuron remote worker instances and work queues and assign work to the legacy applications. Based on load evaluation, the Intelligent Load Balance and Queue Management neurons instantiate new cloud nodes on demand, automatically propagate additional remote instances, and distribute work.
- The legacy application wrapped neuron 15 performs its work and then interacts with external applications automatically. The API Wrapper neuron interacts with Master neuron components, which manage requests, keep counts of work in process, perform load balancing, marshal work to distributed wrapped legacy application instances, instantiate new instances of the legacy applications, and remove instances based on load and threshold levels.
- External transaction requests 102, FIG. 3, occur through HTTP requests 104. External transaction requests initiate an API request to the Web Service Client (JAR) 106. The Web Service Client 106 takes the Web Service Definition Language (WSDL) and turns it into a JAVA call 108. The JAVA call request message 108 is stored in the database, and all units of work are persisted in a neuron persisted queue 109 for resilience, elasticity and high availability.
- The Web Service Client 106 sends a JMS broadcast 110 to the subscribed Master and Slave Dispatchers 112. There can be any number of Masters and Slaves; generally the number is a function of the volume at any given time. This enables dynamic creation and destruction of neurons, which in turn allows real time volume spikes to be managed simply, allows maximum use of available system resources no matter where within the cloud they reside, and enables real time application management. A more detailed view of the master, slave and worker neuron relationship is shown in FIG. 4.
- The Master API obtains an instruction from the wrapped application (a message, a volume spike, a system crash, a request for access from a location, or any number of other instructions) and forwards it to the Dispatcher neuron. The Dispatcher neuron communicates with the Load and Queue Manager. If the Dispatcher is no longer the primary master (for example, it goes down, or the hardware or data center supporting it is no longer accessible), the Slave will take over, applying the required units of work or activity: spooling up more processing power, moving a message, cloning the application, or providing secure access to the application for the requesting remote user.
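- Because the broadcast 110 above is described as JMS, the publish step might look like the sketch below; the use of a topic and an ObjectMessage is an assumption, and connection-factory setup is provider-specific.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;
import javax.jms.Topic;

// Illustrative sketch: publish a unit of work on a JMS topic so that every
// subscribed Master and Slave Dispatcher receives the same broadcast.
class DispatcherBroadcast {
    void broadcast(ConnectionFactory factory, Topic dispatcherTopic,
                   java.io.Serializable unitOfWork) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(dispatcherTopic);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // durable unit of work
            ObjectMessage message = session.createObjectMessage(unitOfWork);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```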
- The Load and Queue Manager keeps a running count of all units of work (instructions and activities) and determines which neuron Server 114 to hand each unit to for processing. Based on round robin selection and thresholds, it hands the message to a Worker Instance. If all workers are busy or otherwise unavailable for any reason, the Load and Queue Manager instantiates a new remote instance and server in real time. Likewise, based on thresholds, low work volume causes remote instances and servers to be removed. The real time creation and destruction of instances, servers and nodes is directly calibrated to the volume of instructions and activities within the platform.
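- The selection and scaling rule just described might be sketched as follows. The WorkerInstance interface, the factory, and the threshold policy are illustrative assumptions rather than the patent's actual algorithm.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative round robin with thresholds: skip saturated workers and
// instantiate a new remote worker when none is available.
class LoadAndQueueManager {
    interface WorkerInstance {
        int queuedUnits();
        void enqueue(Object unitOfWork);
        void shutdownRemote();
    }

    private final List<WorkerInstance> workers = new ArrayList<>();
    private final Supplier<WorkerInstance> remoteFactory; // spins up node + instance
    private final int busyThreshold;
    private int next = 0;

    LoadAndQueueManager(Supplier<WorkerInstance> remoteFactory, int busyThreshold) {
        this.remoteFactory = remoteFactory;
        this.busyThreshold = busyThreshold;
    }

    synchronized WorkerInstance assign(Object unitOfWork) {
        for (int i = 0; i < workers.size(); i++) {
            next = (next + 1) % workers.size();         // round robin
            WorkerInstance worker = workers.get(next);
            if (worker.queuedUnits() < busyThreshold) { // threshold check
                worker.enqueue(unitOfWork);
                return worker;
            }
        }
        // All workers busy or unavailable: create a remote instance in real time.
        WorkerInstance fresh = remoteFactory.get();
        workers.add(fresh);
        fresh.enqueue(unitOfWork);
        return fresh;
    }

    // Low volume: tear down idle workers so capacity tracks demand (keep one).
    synchronized void scaleDown() {
        for (int i = workers.size() - 1; i > 0; i--) {
            if (workers.get(i).queuedUnits() == 0) {
                workers.remove(i).shutdownRemote();
            }
        }
    }
}
```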
- Inter-process communication occurs between the Master Dispatcher and the Slave Dispatcher to ensure the viability of the Dispatcher. If the communications are disrupted, the Slave Dispatcher will become the new Master Dispatcher, and a second Slave Dispatcher will be instantiated to take over as the Slave to the new Master. This auto-hierarchy enables deep resiliency: if the Master fails (a server failure or data center crash) and can no longer communicate with and control the network, its queue, functions and all activities are instantly passed to a Slave, which automatically becomes a Master.
- The only time the Slave Dispatcher becomes involved is when it switches over from the Master; it then uses the persisted queue checkpoint 109 to start processing. When the Slave becomes the Master, it simply re-queues everything.
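- One way to realize this takeover rule is a heartbeat watchdog, sketched below; the timeout value, the PersistedQueue type and all names are assumptions for illustration.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative failover: the Slave watches the Master's heartbeat and, if it
// stops, re-queues the persisted checkpoint and promotes itself to Master.
class SlaveDispatcher {
    interface PersistedQueue { void requeueAll(); } // assumed checkpoint type

    private static final long TIMEOUT_MILLIS = 5_000; // assumed timeout
    private volatile long lastHeartbeat = System.currentTimeMillis();

    void onMasterHeartbeat() {
        lastHeartbeat = System.currentTimeMillis();
    }

    void watch(PersistedQueue checkpoint) {
        ScheduledExecutorService watcher = Executors.newSingleThreadScheduledExecutor();
        watcher.scheduleAtFixedRate(() -> {
            if (System.currentTimeMillis() - lastHeartbeat > TIMEOUT_MILLIS) {
                checkpoint.requeueAll(); // re-queue everything from the checkpoint
                becomeMaster();          // take over dispatching as the new Master
                watcher.shutdown();      // a replacement Slave would be spun up next
            }
        }, 1, 1, TimeUnit.SECONDS);
    }

    private void becomeMaster() {
        // Promotion logic: begin dispatching and instantiate a replacement Slave.
    }
}
```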
- The neuron network will do the work directly. The load and capacity increasing and decreasing the Remote neuron Servers will similarly destroy and instantiate new Remote neuron Servers and un-persist the message from the queue (apply a dual message). The complete message neuron decrements the in-flight count to −1 (the in-memory queue).
- Wrapped
legacy applications 15 communicate and interface through neuron messaging protocols and initiates external transaction updates. - The benefits of the present invention include the ease of wrapping and incorporating existing applications into Cloud and distributed parallel processing infrastructure; rapid migration and transition to cloud infrastructures; automatic load and distribution of processing with platform-enabled instantiation of work queues, applications, and parallel processing on demand; and the ability to simultaneously process multiple concurrent applications sessions, reducing bottlenecks and accelerating complex, high resource processing into significantly shorter turnaround (processing jobs that take days can be reduced to hours and less).
- Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention, which is not to be limited except by the allowed claims and their legal equivalents.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/847,285 US20200241916A1 (en) | 2011-04-07 | 2020-04-13 | Legacy application migration to real time, parallel performance cloud |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161472812P | 2011-04-07 | 2011-04-07 | |
US13/442,353 US9558441B2 (en) | 2009-08-28 | 2012-04-09 | Legacy application migration to real time, parallel performance cloud |
US15/419,937 US10620990B2 (en) | 2010-08-27 | 2017-01-30 | Legacy application migration to real time, parallel performance cloud |
US16/847,285 US20200241916A1 (en) | 2011-04-07 | 2020-04-13 | Legacy application migration to real time, parallel performance cloud |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/419,937 Continuation US10620990B2 (en) | 2010-08-27 | 2017-01-30 | Legacy application migration to real time, parallel performance cloud |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200241916A1 true US20200241916A1 (en) | 2020-07-30 |
Family
ID=46969588
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/442,353 Active US9558441B2 (en) | 2009-08-28 | 2012-04-09 | Legacy application migration to real time, parallel performance cloud |
US15/419,937 Active 2033-04-05 US10620990B2 (en) | 2010-08-27 | 2017-01-30 | Legacy application migration to real time, parallel performance cloud |
US16/847,285 Abandoned US20200241916A1 (en) | 2011-04-07 | 2020-04-13 | Legacy application migration to real time, parallel performance cloud |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/442,353 Active US9558441B2 (en) | 2009-08-28 | 2012-04-09 | Legacy application migration to real time, parallel performance cloud |
US15/419,937 Active 2033-04-05 US10620990B2 (en) | 2010-08-27 | 2017-01-30 | Legacy application migration to real time, parallel performance cloud |
Country Status (5)
Country | Link |
---|---|
US (3) | US9558441B2 (en) |
EP (1) | EP2695050A4 (en) |
CA (1) | CA2832444C (en) |
IL (1) | IL228684A (en) |
WO (1) | WO2012139098A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210263736A1 (en) * | 2020-02-24 | 2021-08-26 | Mobilize.Net Corporation | Semantic functional wrappers of services |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9020868B2 (en) * | 2010-08-27 | 2015-04-28 | Pneuron Corp. | Distributed analytics method for creating, modifying, and deploying software pneurons to acquire, review, analyze targeted data |
US8549656B2 (en) * | 2011-02-11 | 2013-10-01 | Mocana Corporation | Securing and managing apps on a device |
US8990920B2 (en) | 2011-02-11 | 2015-03-24 | Mocana Corporation | Creating a virtual private network (VPN) for a single app on an internet-enabled device or system |
US9537869B2 (en) | 2011-02-11 | 2017-01-03 | Blue Cedar Networks, Inc. | Geographical restrictions for application usage on a mobile device |
US9306933B2 (en) | 2011-02-11 | 2016-04-05 | Mocana Corporation | Ensuring network connection security between a wrapped app and a remote server |
US9223632B2 (en) | 2011-05-20 | 2015-12-29 | Microsoft Technology Licensing, Llc | Cross-cloud management and troubleshooting |
US20130080603A1 (en) * | 2011-09-27 | 2013-03-28 | Microsoft Corporation | Fault Tolerant External Application Server |
US9379903B1 (en) * | 2012-03-16 | 2016-06-28 | Google Inc. | Distributed scheduler |
US9892207B2 (en) * | 2013-02-01 | 2018-02-13 | Sap Se | Automatic migration for on-premise data objects to on-demand data objects |
WO2014130742A1 (en) * | 2013-02-20 | 2014-08-28 | The Digital Marvels, Inc. | Virtual storage system client user interface |
US9632840B2 (en) | 2014-04-22 | 2017-04-25 | International Business Machines Corporation | Load balancing with granularly redistributable workloads |
US9672353B2 (en) | 2014-04-28 | 2017-06-06 | Blue Cedar Networks, Inc. | Securing and managing apps on a device using policy gates |
EP3167366A4 (en) * | 2014-07-08 | 2018-08-01 | Pneuron, Corp. | Virtualized execution across distributed nodes |
US9836332B2 (en) * | 2014-07-31 | 2017-12-05 | Corent Technology, Inc. | Software defined SaaS platform |
US9684470B2 (en) | 2014-09-30 | 2017-06-20 | International Business Machines Corporation | Rapid migration to managed clouds with multiple change windows |
US9619266B2 (en) | 2014-10-10 | 2017-04-11 | International Business Machines Corporation | Tearing down virtual machines implementing parallel operators in a streaming application based on performance |
US10157214B1 (en) | 2014-11-19 | 2018-12-18 | Amazon Technologies, Inc. | Process for data migration between document stores |
US10324712B1 (en) * | 2014-12-24 | 2019-06-18 | Thomas A. Nolan | Method and system of migrating legacy code for upgraded systems |
US9954936B2 (en) | 2015-03-02 | 2018-04-24 | International Business Machines Corporation | Migrating legacy applications to a multi-tenant computing environment |
US10496710B2 (en) | 2015-04-29 | 2019-12-03 | Northrop Grumman Systems Corporation | Online data management system |
US10637735B2 (en) | 2015-08-26 | 2020-04-28 | International Business Machines Corporation | Pattern-based migration of workloads |
CN105303122B (en) * | 2015-10-13 | 2018-02-09 | 北京大学 | The method that the locking of sensitive data high in the clouds is realized based on reconfiguration technique |
US10462262B2 (en) * | 2016-01-06 | 2019-10-29 | Northrop Grumman Systems Corporation | Middleware abstraction layer (MAL) |
US10535002B2 (en) | 2016-02-26 | 2020-01-14 | International Business Machines Corporation | Event resolution as a dynamic service |
US10412192B2 (en) * | 2016-05-10 | 2019-09-10 | International Business Machines Corporation | Jointly managing a cloud and non-cloud environment |
CN106250112A (en) * | 2016-07-19 | 2016-12-21 | 浪潮(北京)电子信息产业有限公司 | A kind of auxiliary system for developing software, method and software development system |
US10164859B2 (en) * | 2016-08-29 | 2018-12-25 | Salesforce.Com, Inc. | Methods and apparatus to perform elastic monitoring of software applications using embedded watchdogs |
US10698750B2 (en) | 2017-04-24 | 2020-06-30 | At&T Intellectual Property I, L.P. | Cross-vertical service development |
US10628280B1 (en) | 2018-02-06 | 2020-04-21 | Northrop Grumman Systems Corporation | Event logger |
US11257184B1 (en) | 2018-02-21 | 2022-02-22 | Northrop Grumman Systems Corporation | Image scaler |
US11157003B1 (en) | 2018-04-05 | 2021-10-26 | Northrop Grumman Systems Corporation | Software framework for autonomous system |
US10715385B2 (en) | 2018-09-27 | 2020-07-14 | International Business Machines Corporation | System and method for live migration for software agents |
US11392284B1 (en) | 2018-11-01 | 2022-07-19 | Northrop Grumman Systems Corporation | System and method for implementing a dynamically stylable open graphics library |
US10782936B1 (en) * | 2019-01-30 | 2020-09-22 | Architecture Technology Corporation | Programming migration system and methods |
US10868717B2 (en) * | 2019-01-31 | 2020-12-15 | Hewlett Packard Enterprise Development Lp | Concurrent profile deployments |
US11221855B2 (en) | 2020-03-06 | 2022-01-11 | International Business Machines Corporation | Transformation of an enterprise application into a cloud native application |
US11221846B2 (en) | 2020-03-19 | 2022-01-11 | International Business Machines Corporation | Automated transformation of applications to a target computing environment |
US11803413B2 (en) | 2020-12-03 | 2023-10-31 | International Business Machines Corporation | Migrating complex legacy applications |
US11522949B1 (en) * | 2021-11-19 | 2022-12-06 | Jpmorgan Chase Bank, N.A. | Systems and methods for cloud-based hybrid service meshes in microservice architectures |
IT202200004481A1 (en) | 2022-03-09 | 2023-09-09 | Massimo ANELLA | OPERATING SYSTEM WITH SEMIAUTOMATIC TRANSFER OF BUSINESS APPLICATIONS |
US20230385730A1 (en) * | 2022-05-24 | 2023-11-30 | Red Hat, Inc. | Segmenting processes into stand-alone services |
Family Cites Families (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6470389B1 (en) * | 1997-03-14 | 2002-10-22 | Lucent Technologies Inc. | Hosting a network service on a cluster of servers using a single-address image |
US6289450B1 (en) | 1999-05-28 | 2001-09-11 | Authentica, Inc. | Information security architecture for encrypting documents for remote access while maintaining access control |
US6529909B1 (en) * | 1999-08-31 | 2003-03-04 | Accenture Llp | Method for translating an object attribute converter in an information services patterns environment |
US6636242B2 (en) | 1999-08-31 | 2003-10-21 | Accenture Llp | View configurer in a presentation services patterns environment |
US20050223392A1 (en) * | 2000-12-01 | 2005-10-06 | Cox Burke D | Method and system for integration of software applications |
US6757689B2 (en) | 2001-02-02 | 2004-06-29 | Hewlett-Packard Development Company, L.P. | Enabling a zero latency enterprise |
WO2002067401A2 (en) | 2001-02-16 | 2002-08-29 | Idax, Inc. | Decision support for automated power trading |
BR0210865A (en) | 2001-07-05 | 2004-06-29 | Computer Ass Think Inc | Method and system for identifying business events, and, computer readable storage media |
WO2003015026A1 (en) | 2001-08-10 | 2003-02-20 | Saffron Technology, Inc. | Artificial neurons including weights that define maximal projections |
US6636779B2 (en) * | 2001-10-26 | 2003-10-21 | Storage Technology Corporation | Tape library mirrored redundant controllers |
US20040024720A1 (en) | 2002-02-01 | 2004-02-05 | John Fairweather | System and method for managing knowledge |
US6946715B2 (en) | 2003-02-19 | 2005-09-20 | Micron Technology, Inc. | CMOS image sensor and method of fabrication |
US7340718B2 (en) * | 2002-09-30 | 2008-03-04 | Sap Ag | Unified rendering |
US20040122937A1 (en) | 2002-12-18 | 2004-06-24 | International Business Machines Corporation | System and method of tracking messaging flows in a distributed network |
US7010513B2 (en) | 2003-04-14 | 2006-03-07 | Tamura Raymond M | Software engine for multiple, parallel processing with neural networks |
US7529722B2 (en) | 2003-12-22 | 2009-05-05 | Dintecom, Inc. | Automatic creation of neuro-fuzzy expert system from online analytical processing (OLAP) tools
US20060184410A1 (en) | 2003-12-30 | 2006-08-17 | Shankar Ramamurthy | System and method for capture of user actions and use of capture data in business processes |
US10796364B2 (en) | 2004-04-15 | 2020-10-06 | Nyse Group, Inc. | Process for providing timely quality indication of market trades |
US8571011B2 (en) * | 2004-08-13 | 2013-10-29 | Verizon Business Global Llc | Method and system for providing voice over IP managed services utilizing a centralized data store |
US7557707B2 (en) | 2004-09-01 | 2009-07-07 | Microsoft Corporation | RFID enabled information systems utilizing a business application |
US8266237B2 (en) | 2005-04-20 | 2012-09-11 | Microsoft Corporation | Systems and methods for providing distributed, decentralized data storage and retrieval |
US8429630B2 (en) * | 2005-09-15 | 2013-04-23 | Ca, Inc. | Globally distributed utility computing cloud |
US20070078692A1 (en) | 2005-09-30 | 2007-04-05 | Vyas Bhavin J | System for determining the outcome of a business decision |
US8914618B2 (en) * | 2005-12-29 | 2014-12-16 | Intel Corporation | Instruction set architecture-based inter-sequencer communications with a heterogeneous resource |
US20090113049A1 (en) | 2006-04-12 | 2009-04-30 | Edsa Micro Corporation | Systems and methods for real-time forecasting and predicting of electrical peaks and managing the energy, health, reliability, and performance of electrical power systems based on an artificial adaptive neural network |
US7881990B2 (en) | 2006-11-30 | 2011-02-01 | Intuit Inc. | Automatic time tracking based on user interface events |
US8468244B2 (en) | 2007-01-05 | 2013-06-18 | Digital Doors, Inc. | Digital information infrastructure and method for security designated data and with granular data stores |
US8655939B2 (en) | 2007-01-05 | 2014-02-18 | Digital Doors, Inc. | Electromagnetic pulse (EMP) hardened information infrastructure with extractor, cloud dispersal, secure storage, content analysis and classification and method therefor |
US8626844B2 (en) | 2007-03-26 | 2014-01-07 | The Trustees Of Columbia University In The City Of New York | Methods and media for exchanging data between nodes of disconnected networks |
US20100064033A1 (en) * | 2008-09-08 | 2010-03-11 | Franco Travostino | Integration of an internal cloud infrastructure with existing enterprise services and systems |
US20100077205A1 (en) | 2008-09-19 | 2010-03-25 | Ekstrom Joseph J | System and Method for Cipher E-Mail Protection |
US8069242B2 (en) | 2008-11-14 | 2011-11-29 | Cisco Technology, Inc. | System, method, and software for integrating cloud computing systems |
US8423494B2 (en) * | 2009-04-15 | 2013-04-16 | Virginia Polytechnic Institute And State University | Complex situation analysis system that generates a social contact network, uses edge brokers and service brokers, and dynamically adds brokers |
SG178589A1 (en) * | 2009-08-28 | 2012-04-27 | Pneuron Corp | System and method using neural networks for real-time business intelligence and automation control |
US8490087B2 (en) * | 2009-12-02 | 2013-07-16 | International Business Machines Corporation | System and method for transforming legacy desktop environments to a virtualized desktop model |
CA2792871A1 (en) | 2010-03-11 | 2011-09-15 | Entegrity LLC | Methods and systems for data aggregation and reporting |
US20120102103A1 (en) * | 2010-10-20 | 2012-04-26 | Microsoft Corporation | Running legacy applications on cloud computing systems without rewriting |
WO2013049715A1 (en) | 2011-09-29 | 2013-04-04 | Cirro, Inc. | Federated query engine for federation of data queries across structure and unstructured data |
EP2761498A4 (en) | 2011-09-30 | 2015-08-26 | Cirro Inc | Spreadsheet based data store interface |
- 2012
  - 2012-04-09 US US13/442,353 patent/US9558441B2/en active Active
  - 2012-04-09 EP EP12767776.3A patent/EP2695050A4/en not_active Ceased
  - 2012-04-09 WO PCT/US2012/032726 patent/WO2012139098A1/en active Application Filing
  - 2012-04-09 CA CA2832444A patent/CA2832444C/en active Active
- 2013
  - 2013-10-02 IL IL228684A patent/IL228684A/en active IP Right Grant
- 2017
  - 2017-01-30 US US15/419,937 patent/US10620990B2/en active Active
- 2020
  - 2020-04-13 US US16/847,285 patent/US20200241916A1/en not_active Abandoned
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210263736A1 (en) * | 2020-02-24 | 2021-08-26 | Mobilize.Net Corporation | Semantic functional wrappers of services |
US11789726B2 (en) * | 2020-02-24 | 2023-10-17 | Snowflake Inc. | Semantic functional wrappers of services |
US20230401058A1 (en) * | 2020-02-24 | 2023-12-14 | Snowflake Inc. | Semantic functional wrappers of services |
Also Published As
Publication number | Publication date |
---|---|
IL228684A0 (en) | 2013-12-31 |
CA2832444A1 (en) | 2012-10-11 |
US9558441B2 (en) | 2017-01-31 |
EP2695050A4 (en) | 2016-03-23 |
US10620990B2 (en) | 2020-04-14 |
EP2695050A1 (en) | 2014-02-12 |
WO2012139098A1 (en) | 2012-10-11 |
IL228684A (en) | 2017-05-29 |
US20120259909A1 (en) | 2012-10-11 |
CA2832444C (en) | 2017-10-17 |
US20170139741A1 (en) | 2017-05-18 |
Similar Documents
Publication | Title
---|---|
US20200241916A1 (en) | Legacy application migration to real time, parallel performance cloud
CN111543037B (en) | Event-driven serverless function orchestration | |
US11353838B2 (en) | Distributed computing in a process control environment | |
US9529582B2 (en) | Modular architecture for distributed system management | |
US10326832B2 (en) | Combining application and data tiers on different platforms to create workload distribution recommendations | |
Oyeniran et al. | Microservices architecture in cloud-native applications: Design patterns and scalability | |
US20230034835A1 (en) | Parallel Processing in Cloud | |
US11303521B1 (en) | Support platform with bi-directional communication channel for performing remote actions on computing devices | |
US11269691B2 (en) | Load distribution for integration scenarios | |
Almeida et al. | RIC-O: Efficient placement of a disaggregated and distributed RAN Intelligent Controller with dynamic clustering of radio nodes | |
Bharadwaj et al. | Transition of cloud computing from traditional applications to the cloud native approach | |
González et al. | HerdMonitor: monitoring live migrating containers in cloud environments | |
CN117099083A (en) | Scheduler for a planetary level computing system | |
US20190332442A1 (en) | Resource schedule optimization | |
US20200104173A1 (en) | Communication process load balancing in an automation engine | |
Mao et al. | A Load Balancing and Overload Controlling Architecture in Cloud Computing
US20230300086A1 (en) | On-demand resource capacity in a serverless function-as-a-service infrastructure | |
de Morais et al. | Cloud-aware middleware | |
Bunyakitanon et al. | Performance Measurement of Live Migration Algorithms | |
Rodrigues | An adaptive robotics middleware for a cloud-based bridgeOS | |
Joseph | Microservice Orchestration Strategies for Containerized Cloud Environments | |
CN117827454A (en) | Task processing method and device | |
Elsharkawey et al. | A proposed fault tolerance model for cloud system based on the distributed shared memory | |
Ramos et al. | Distributed generative data mining | |
CN116957465A (en) | Warehouse management system based on micro-service architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: PHEURON CORP., NEW HAMPSHIRE
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BACHELOR, DOUGLAS WILEY;CURBELO, RAUL HUGO;ELKINS, ELIZABETH WINTERS;AND OTHERS;SIGNING DATES FROM 20120503 TO 20120519;REEL/FRAME:052860/0292

Owner name: PNEURON CORP., NEW HAMPSHIRE
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FOUNTAIN, THOMAS C.;REEL/FRAME:052860/0300
Effective date: 20131106

Owner name: UST GLOBAL (SINGAPORE) PTE. LTD., SINGAPORE
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PNEURON CORP.;REEL/FRAME:052860/0314
Effective date: 20180112
AS | Assignment |
Owner name: PNEURON CORP., NEW HAMPSHIRE
Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE NAME PREVIOUSLY RECORDED ON REEL 052860 FRAME 0292. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:BACHELOR, DOUGLAS WILEY;CURBELO, RAUL HUGO;ELKINS, ELIZABETH WINTERS;AND OTHERS;SIGNING DATES FROM 20120503 TO 20120519;REEL/FRAME:052985/0355
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
AS | Assignment |
Owner name: CITIBANK, N.A., AS AGENT, TEXAS
Free format text: SECURITY INTEREST;ASSIGNOR:UST GLOBAL (SINGAPORE) PTE. LIMITED;REEL/FRAME:058309/0929
Effective date: 20211203
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |