US20210089918A1 - Systems and methods for cooperative machine learning - Google Patents
- Publication number
- US20210089918A1 (U.S. application Ser. No. 17/115,395)
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G06N 20/20 — Machine learning; ensemble learning
- G06N 3/045 — Neural network architecture, e.g. interconnection topology; combinations of networks
- H04L 67/01 — Network arrangements or protocols for supporting network services or applications; protocols (formerly H04L 67/42)
- H04L 67/59 — Network services; provisioning of proxy services; providing operational support to end devices by off-loading in the network or by emulation, e.g. when they are unavailable
- FIG. 1 illustrates a system configured for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms, in accordance with one or more implementations.
- FIG. 2 illustrates a method for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms, in accordance with one or more implementations.
- FIG. 1 illustrates a system configured for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms, in accordance with one or more implementations.
- system 100 may include one or more servers 102.
- the server(s) 102 and client computing platforms 104 may be configured to communicate according to a client/server architecture, a peer-to-peer architecture, and/or other architectures.
- the users may access system 100 via client computing platform(s) 104 .
- the client computing platform(s) 104 may be configured to execute machine-readable instructions 106 .
- the machine-readable instructions 106 may include one or more of a client-side neural network component 108 , a client-side learning component 110 , and/or other components.
- the client-side neural network component 108 may include a client-side machine learning model configured to facilitate deep neural network operations on structured data.
- the structured data may include one or more of images, images within images, symbols, logos, objects, video, audio, text, and/or other structured data.
- the deep neural network operations may relate to deep learning, deep structured learning, hierarchical learning, deep machine learning, and/or other types of machine learning.
- a deep neural network operation may include a set of algorithms that attempt to model high level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and/or non-linear transformations.
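The "deep graph with multiple processing layers, composed of multiple linear and/or non-linear transformations" described above can be sketched as a minimal forward pass. Everything below (the layer sizes, the ReLU non-linearity, and the function names) is an illustrative assumption, not the patent's implementation:

```python
import random

def relu(x):
    # Non-linear transformation applied between layers
    return [max(0.0, v) for v in x]

def linear(x, w, b):
    # One linear transformation: y_j = sum_i x[i] * w[i][j] + b[j]
    return [sum(xi * w_row[j] for xi, w_row in zip(x, w)) + bj
            for j, bj in enumerate(b)]

def forward(x, layers):
    """Deep graph: alternating linear and non-linear transformations."""
    for w, b in layers[:-1]:
        x = relu(linear(x, w, b))
    w, b = layers[-1]
    return linear(x, w, b)  # final linear layer (e.g., class scores)

random.seed(0)
dims = [8, 16, 16, 4]  # three processing layers: 8 -> 16 -> 16 -> 4
layers = [([[random.gauss(0, 0.1) for _ in range(n)] for _ in range(m)],
           [0.0] * n)
          for m, n in zip(dims[:-1], dims[1:])]

scores = forward([1.0] * 8, layers)
print(len(scores))  # 4
```

Each pass through the loop models one "processing layer"; stacking several of them is what lets the network represent higher-level abstractions of the input.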
- the deep neural network operations may be performed by a client application installed on a given client computing platform 104 .
- the client application may include a software application designed to run on the given client computing platform 104 .
- the client application may locally access the client-side machine learning model in order to perform the deep neural network operations. These operations may occur at very low latency because they are embedded on the client computing platform(s) 104 , as opposed to running in the cloud.
- the client-side neural network component 108 may be embedded in the client application installed on the given client computing platform 104 .
- the application may access the client-side neural network component 108 stored locally on the given client computing platform 104 .
- the client-side neural network component 108 may include a code library enabling the deep neural network operations.
- the client-side neural network component 108 may include multiple client-side learning models such that an arbitrary workflow can be applied to determine desired outputs.
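The arbitrary-workflow idea above, where multiple client-side models are chained to produce a desired output, might look like the following sketch. Each "model" here is a stand-in callable, and the chaining rule is our assumption:

```python
def detect_regions(image):
    # Pretend detector: return a list of crops (here, just the image itself)
    return [image]

def classify(crop):
    # Pretend classifier: label a crop by a trivial rule
    return "logo" if sum(crop) > 10 else "background"

def run_workflow(data, steps):
    """Apply a chain of models, feeding each output into the next step."""
    for step in steps:
        if isinstance(data, list):
            data = [step(item) for item in data]  # fan out over detections
        else:
            data = step(data)
    return data

# A two-model workflow: detect regions, then classify each region
labels = run_workflow((5, 9), [detect_regions, classify])
print(labels)  # ['logo']
```

Because `run_workflow` takes the step list as data, any ordering of models (the "arbitrary workflow") can be assembled at run time.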
- the client-side learning component 110 may be configured to improve the client-side machine learning model based on one or more of new model states received from other locations in system 100 (described further below), usage of the application, user interactions with the given client computing platform 104 , one or more other client computing platforms 104 , and/or other information. According to some implementations, sharing of information beyond model states may facilitate learning for a distributed machine learning model from multiple client computing platform(s) 104 and/or server(s) 102 .
- the client-side learning component 110 may be configured to improve the client-side machine learning model in real time or near-real time.
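One hedged illustration of improving the client-side model from a newly received model state is blending locally learned parameters with the incoming ones, in the spirit of federated averaging. The dictionary layout, the blend weight `alpha`, and the function names are ours, not the patent's:

```python
def merge_states(local_state, received_state, alpha=0.5):
    """Blend local parameters with parameters received from the cloud.

    alpha controls how strongly the received state is adopted.
    """
    return {
        name: [(1 - alpha) * lp + alpha * rp
               for lp, rp in zip(local_params, received_state[name])]
        for name, local_params in local_state.items()
    }

local = {"layer1": [1.0, 3.0], "layer2": [4.0]}
from_cloud = {"layer1": [3.0, 1.0], "layer2": [0.0]}
merged = merge_states(local, from_cloud, alpha=0.5)
print(merged)  # {'layer1': [2.0, 2.0], 'layer2': [2.0]}
```

A blend like this is cheap enough to run on-device each time a new state arrives, which is consistent with the real-time or near-real-time improvement described above.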
- learning from this data may occur in a software development kit (SDK).
- the SDK may provide a low latency and no-wireless-network-connection capability to recognize new things in the world.
- learning may occur on an application programming interface (API) platform to benefit from data provided from multiple client computing platforms and/or other API platform users.
- the server(s) 102 may be configured to execute machine-readable instructions 112 .
- the machine-readable instructions 112 may include one or more of a communications component 114 , a cloud neural network component 116 , a synchronization component 118 , a cloud learning component 120 , an optimization component 122 , and/or other components.
- the communications component 114 may be configured to facilitate communications between client computing platform(s) 104 and server(s) 102 .
- the cloud neural network component 116 may include a cloud machine learning model configured to facilitate deep neural network operations on structured data. These deep neural network operations may be the same as or similar to the ones facilitated by the client-side machine learning model described in connection with client-side neural network component 108 .
- the synchronization component 118 may be configured to facilitate sharing of information between the cloud machine learning model and the client-side machine learning model. Such information may include model states of the cloud machine learning model and/or the client-side machine learning model.
- the model states may include model parameters.
- the sharing of model states occurs on a periodic basis. In some implementations, the sharing of model states occurs responsive to the client computing platform 104 coming online.
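A minimal sketch of this synchronization policy — share on a periodic schedule, or immediately when the client comes back online — with an assumed one-hour period and illustrative names:

```python
SYNC_PERIOD_SECONDS = 3600.0  # assumed period; the patent does not fix one

def should_sync(now, last_sync, was_offline, is_online):
    """Decide whether to exchange model states with the cloud."""
    if not is_online:
        return False               # nothing to share while offline
    if was_offline:
        return True                # sync immediately on coming online
    return now - last_sync >= SYNC_PERIOD_SECONDS

# Periodic case: a full period has elapsed since the last sync
print(should_sync(now=7200.0, last_sync=3600.0, was_offline=False, is_online=True))  # True
# Reconnect case: sync right away even though the period has not elapsed
print(should_sync(now=100.0, last_sync=50.0, was_offline=True, is_online=True))      # True
```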
- the synchronization component 118 may be configured to facilitate sharing of model states between different client-side machine learning models embodied at different client computing platform(s) 104 .
- the synchronization component 118 may be configured to facilitate sharing of information beyond the model states between the cloud machine learning model and the client-side machine learning model.
- Such information beyond the model states may include one or more of images, images within images, symbols, logos, objects, tags, videos, audio, geolocation, accelerometer data, metadata, and/or other information.
- sharing of information beyond the model states may be subject to end user consent.
- sharing of some information beyond the model states may be one-way sharing from the client computing platform 104 .
- the cloud learning component 120 may be configured to improve the cloud machine learning model. Such improvements may be made based on one or more of new model states received from other locations in system 100 such as client computing platform(s) 104 , usage of the application, user interactions with client computing platform(s) 104 , and/or other information. In some implementations, the cloud learning component 120 may be configured to improve the cloud machine learning model in real time or near-real time.
- advertisements may be pushed to individual client computing platforms 104 based on the cloud machine learning model and/or other information received from client computing platform(s) 104 .
- the optimization component 122 may be configured to determine workflows between a given client computing platform 104 , one or more other client computing platforms 104 , and/or server(s) 102 that chain multiple machine learning models.
- the workflows may be determined in an arbitrary graph.
- the workflow determination may be based on one or more of availability of network connection, network bandwidth, latency, throughput, number of concepts recognized by the machine learning models, importance of low latency, importance of high accuracy, user's preferences, preferences associated with the given client computing platform 104 , and/or other information.
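A hedged sketch of such a workflow determination, using made-up candidate workflows and a simple connectivity-plus-preference rule (the patent does not specify a scoring method, so the fields and logic below are assumptions):

```python
def choose_workflow(candidates, online, prefer_low_latency):
    """Pick a workflow given connectivity and a latency/accuracy preference.

    Each candidate is a dict with 'name', 'needs_network',
    'latency_ms', and 'accuracy' (0..1).
    """
    usable = [c for c in candidates if online or not c["needs_network"]]
    if prefer_low_latency:
        return min(usable, key=lambda c: c["latency_ms"])
    return max(usable, key=lambda c: c["accuracy"])

candidates = [
    {"name": "on_device", "needs_network": False, "latency_ms": 30, "accuracy": 0.85},
    {"name": "cloud", "needs_network": True, "latency_ms": 400, "accuracy": 0.95},
    {"name": "chained", "needs_network": True, "latency_ms": 250, "accuracy": 0.92},
]

print(choose_workflow(candidates, online=False, prefer_low_latency=False)["name"])  # on_device
print(choose_workflow(candidates, online=True, prefer_low_latency=True)["name"])    # on_device
print(choose_workflow(candidates, online=True, prefer_low_latency=False)["name"])   # cloud
```

Offline, only the on-device workflow is usable regardless of preference; online, the choice trades latency against accuracy, mirroring the factors listed above.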
- As a non-limiting example, a set of cooperating client computing platforms 104 may be those of a company's users. For individual ones of those client computing platforms 104, the company may obtain an understanding of the type of data its users encounter in the world.
- Using her client computing platform 104 (e.g., a mobile phone), a user may be able to train a machine learning model of her pet inside her home. That specific machine learning model may be pushed to server(s) 102.
- the machine learning model may be deployed to run on the user's webcam to process live feeds of video in order to find any frames of video that contain her pet.
- This data may be reported as all video frames sent back to server(s) 102, or as select video frames depending on the recognition of the pet. This data may serve as analytical data (e.g., the fraction of frames where the pet was recognized).
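The analytical-data example above reduces to a small computation over recognition results. The confidence-score representation and the threshold below are placeholders for whatever the deployed model actually emits:

```python
def recognized(frame, threshold=0.5):
    # Placeholder: a real model would return a per-frame confidence score
    return frame["pet_confidence"] >= threshold

def pet_frame_fraction(frames):
    """Fraction of frames where the pet was recognized."""
    if not frames:
        return 0.0
    hits = sum(1 for f in frames if recognized(f))
    return hits / len(frames)

frames = [
    {"pet_confidence": 0.9},
    {"pet_confidence": 0.2},
    {"pet_confidence": 0.7},
    {"pet_confidence": 0.1},
]
print(pet_frame_fraction(frames))  # 0.5
```

Reporting only this aggregate (rather than the frames themselves) is one way the "select video frames" reporting mode could reduce what leaves the device.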
- Another non-limiting example may involve deploying recognition of the same concept to multiple devices for surveillance.
- a machine learning model could be trained to recognize a concept, and then be deployed immediately so the data from each video frame could be fed back to a centralized dashboard via client computing platform(s) 104 and/or server(s) 102 to alert when the desired concepts are recognized.
- server(s) 102 , client computing platform(s) 104 , and/or external resources 124 may be operatively linked via one or more electronic communication links.
- electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which server(s) 102 , client computing platform(s) 104 , and/or external resources 124 may be operatively linked via some other communication media.
- a given client computing platform 104 may include electronic storage 126 , one or more processors 130 configured to execute machine-readable instructions, and/or other components.
- the machine-readable instructions may be configured to enable a user associated with the given client computing platform 104 to interface with system 100 and/or external resources 124 , and/or provide other functionality attributed herein to client computing platform(s) 104 .
- the given client computing platform 104 may include one or more of a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, a NetBook, a Smartphone, a gaming console, and/or other computing platforms.
- External resources 124 may include sources of information, hosts and/or providers of machine learning outside of system 100 , external entities participating with system 100 , and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 124 may be provided by resources included in system 100 .
- Server(s) 102 may include electronic storage 128 , one or more processors 132 , and/or other components. Server(s) 102 may include communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of server(s) 102 in FIG. 1 is not intended to be limiting. Server(s) 102 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server(s) 102 . For example, server(s) 102 may be implemented by a cloud of computing platforms operating together as server(s) 102 .
- Electronic storage 126 and 128 may comprise non-transitory storage media that electronically stores information.
- the electronic storage media of electronic storage 126 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with a given client computing platform 104 and/or removable storage that is removably connectable to the given client computing platform 104 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
- the electronic storage media of electronic storage 128 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with server(s) 102 and/or removable storage that is removably connectable to server(s) 102 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
- Electronic storage 126 and 128 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
- Electronic storage 128 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
- Electronic storage 126 and 128 may store software algorithms, information determined by processor(s) 130 and 132, information received from server(s) 102, information received from client computing platform(s) 104, and/or other information that enables server(s) 102 and client computing platform(s) 104 to function as described herein.
- Processor(s) 130 may be configured to provide information processing capabilities in client computing platform(s) 104 .
- Processor(s) 132 may be configured to provide information processing capabilities in server(s) 102 .
- processor(s) 130 and 132 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
- Although processor(s) 130 and 132 are shown in FIG. 1 as single entities, this is for illustrative purposes only.
- processor(s) 130 and 132 may each include a plurality of processing units.
- processor(s) 132 may represent processing functionality of a plurality of servers operating in coordination.
- the processor(s) 130 and/or 132 may be configured to execute machine-readable instruction components 108 , 110 , 114 , 116 , 118 , 120 , 122 , and/or other machine-readable instruction components.
- the processor(s) 130 and/or 132 may be configured to execute machine-readable instruction components 108 , 110 , 114 , 116 , 118 , 120 , 122 , and/or other machine-readable instruction components by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s) 130 and/or 132 .
- Although machine-readable instruction components 108, 110, 114, 116, 118, 120, and 122 are illustrated in FIG. 1 as being implemented within single processing units, in implementations in which processor(s) 130 and/or 132 include multiple processing units, one or more of machine-readable instruction components 108, 110, 114, 116, 118, 120, and/or 122 may be implemented remotely from the other components and/or subcomponents.
- The description herein of the functionality provided by machine-readable instruction components 108, 110, 114, 116, 118, 120, and/or 122 is for illustrative purposes, and is not intended to be limiting, as any of machine-readable instruction components 108, 110, 114, 116, 118, 120, and/or 122 may provide more or less functionality than is described.
- In some implementations, one or more of machine-readable instruction components 108, 110, 114, 116, 118, 120, and/or 122 may be eliminated, and some or all of their functionality may be provided by other ones of machine-readable instruction components 108, 110, 114, 116, 118, 120, and/or 122.
- processor(s) 130 and/or 132 may be configured to execute one or more additional machine-readable instruction components that may perform some or all of the functionality attributed herein to one of machine-readable instruction components 108, 110, 114, 116, 118, 120, and/or 122.
- FIG. 2 illustrates a method 200 for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms, in accordance with one or more implementations.
- the operations of method 200 presented below are intended to be illustrative. In some implementations, method 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 200 are illustrated in FIG. 2 and described below is not intended to be limiting.
- method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on an electronic storage medium.
- the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200 .
- a client computing platform 104 A may perform deep neural network operations on structured data.
- the structured data may include one or more of images, images within images, symbols, logos, objects, video, audio, text, and/or other structured data obtained by or stored by the client computing platform 104 A.
- the deep neural network operations may be performed by a client application installed on the client computing platform 104 A.
- the client application may locally access the client-side machine learning model in order to perform the deep neural network operations.
- Operation 202 may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to client-side neural network component 108 (as described in connection with FIG. 1 ), in accordance with one or more implementations.
- the client computing platform 104 A may improve the client-side machine learning model based on one or more of new model states received from other locations in system 100 , usage of the application, user interactions with the given client computing platform 104 , and/or other information. Operation 204 may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to client-side learning component 110 (as described in connection with FIG. 1 ), in accordance with one or more implementations.
- server(s) 102 may perform deep neural network operations on structured data. These deep neural network operations may be the same as or similar to the ones facilitated by the client-side machine learning model described in connection with client-side neural network component 108 . Operation 206 may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to cloud neural network component 116 (as described in connection with FIG. 1 ), in accordance with one or more implementations.
- server(s) 102 may improve the cloud machine learning model. Such improvements may be made based on one or more of new model states received from other locations in system 100 , usage of the application, user interactions with client computing platform(s) 104 , and/or other information. Operation 208 may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to cloud learning component 120 (as described in connection with FIG. 1 ), in accordance with one or more implementations.
- information may be shared between the cloud machine learning model and the client-side machine learning model.
- Such information may include model states of the cloud machine learning model and/or the client-side machine learning model.
- the model states may include model parameters.
- the sharing may occur between client computing platform 104 A, server(s) 102 , and/or other client computing platform(s) 104 X.
- Operations 210 A, 210 B, and/or 210 C may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to synchronization component 118 (as described in connection with FIG. 1 ), in accordance with one or more implementations.
- information beyond the model states may be shared between the cloud machine learning model and the client-side machine learning model.
- Such information beyond the model states may include one or more of images, images within images, symbols, logos, objects, tags, videos, audio, geolocation, accelerometer data, metadata, and/or other information.
- sharing of information beyond the model states may be subject to end user consent. The sharing may occur between client computing platform 104 A, server(s) 102 , and/or other client computing platform(s) 104 X. In some implementations, sharing of some information beyond the model states may be one-way sharing from the client computing platform 104 .
- Operations 212 A, 212 B, and/or 212 C may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to synchronization component 118 (as described in connection with FIG. 1 ), in accordance with one or more implementations.
- Operations 214 and 216 illustrate that the learning processes of system 100 can be recursive.
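The recursive learning processes noted above might be sketched as repeated rounds of local learning followed by state sharing. The scalar "states", the additive updates, and the averaging rule below are purely illustrative stand-ins for real model parameters and learning steps:

```python
def run_round(client_state, cloud_state, local_update, cloud_update):
    """One round: each side improves its model, then states are shared."""
    client_state = client_state + local_update    # client-side learning (e.g., operation 204)
    cloud_state = cloud_state + cloud_update      # cloud-side learning (e.g., operation 208)
    shared = (client_state + cloud_state) / 2.0   # share model states (e.g., operations 210)
    return shared, shared

client, cloud = 0.0, 4.0
for _ in range(2):  # operations 214 and 216: the process repeats
    client, cloud = run_round(client, cloud, local_update=1.0, cloud_update=1.0)
print(client, cloud)  # 4.0 4.0
```

Each round folds the newest client and cloud improvements into a shared state, so later rounds always start from the cooperatively learned model rather than from scratch.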
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 15/276,655, filed Sep. 26, 2016, the content of which is incorporated herein in its entirety by reference.
- This disclosure relates to systems and methods for cooperative machine learning, for example, across multiple client computing platforms, the cloud enabling off-line deep neural network operations on client computing platforms, etc.
- Deep neural networks have traditionally been run on very fast computers with expensive graphics cards, due to their voracious appetite for computing power. Currently, however, it is difficult for mobile developers to use this technology. For example, there are few implementations for mobile, many of the models are too big, and the implementations are not easy to use. Existing approaches use cloud machine learning APIs, but these suffer from very high latency.
- Exemplary implementations disclosed herein may allow mobile applications to use deep learning to understand structured data like images, video, audio, and text. In particular, exemplary implementations enable real-time applications of machine learning on mobile devices due to low latency facilitated by sharing of models between users. Exemplary implementations also solve issues related to siloing. For example, mobile developers may want to use feedback from users to develop smart models. If there is no cloud for centralized learning, every model trained by a user will be only as good as that user has made it. With a central cloud, users can collaborate to train much smarter models, as well as sharing new ones between devices. This may enable a whole host of new applications and features for mobile developers.
- Accordingly, one aspect of the disclosure relates to a system configured for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms. The system may comprise one or more hardware processors configured by machine-readable instruction components. The components may comprise a communications component, a cloud neural network component, a synchronization component, a cloud learning component, and an optimization component. The communications component may be configured to facilitate communications between one or more client computing platforms and one or more servers. The one or more client computing platforms may include a first client computing platform. The first client computing platform may include a client-side machine learning model configured to facilitate deep neural network operations on structured data. The operations may be performed by a client application installed on the first client computing platform. The client application may locally access the client-side machine learning model in order to perform the operations. The cloud neural network component may include a cloud machine learning model configured to facilitate deep neural network operations on structured data. The synchronization component may be configured to facilitate sharing of model states between the cloud machine learning model and the client-side machine learning model. The cloud learning component may be configured to improve the cloud machine learning model based on usage of the application and user interactions with the first client computing platform. The optimization component may be configured to determine workflows between one or more client computing platforms and one or more servers that chain multiple machine learning models.
- Another aspect of the disclosure relates to a method for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms. The method may be performed by one or more hardware processors configured by machine-readable instructions. The method may include facilitating, at one or more servers, communications between one or more client computing platforms and one or more servers. The one or more client computing platforms may include a first client computing platform. The first client computing platform may include a client-side machine learning model configured to facilitate deep neural network operations on structured data. The operations may be performed by a client application installed on the first client computing platform. The client application may locally access the client-side machine learning model in order to perform the operations. The method may include facilitating, at one or more servers, deep neural network operations on structured data. The method may include facilitating, at one or more servers, sharing of model states between the cloud machine learning model and the client-side machine learning model. The method may include improving, at one or more servers, the cloud machine learning model based on usage of the application and user interactions with the first client computing platform. The method may include determining, at one or more servers, workflows between one or more client computing platforms and one or more servers that chain multiple machine learning models.
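The final determining step, choosing workflows that chain multiple machine learning models, can be illustrated with a toy selector. Everything here (model names, latency numbers, thresholds) is assumed for illustration; the disclosure does not specify a selection rule:

```python
def choose_workflow(network_available: bool, latency_budget_ms: float,
                    min_concepts: int) -> list:
    """Pick which chained models serve a request.

    Decision inputs mirror those named in the disclosure: availability
    of a network connection, importance of low latency, and the number
    of concepts each model recognizes. Thresholds are illustrative.
    """
    local = {"name": "client_model", "latency_ms": 20, "concepts": 100}
    cloud = {"name": "cloud_model", "latency_ms": 250, "concepts": 10000}

    if not network_available:
        return [local["name"]]                 # offline: client model only
    if latency_budget_ms < cloud["latency_ms"]:
        return [local["name"]]                 # low latency matters most
    if min_concepts > local["concepts"]:
        return [cloud["name"]]                 # need cloud-scale vocabulary
    # Otherwise chain both: fast local filter, then cloud refinement.
    return [local["name"], cloud["name"]]

print(choose_workflow(False, 1000, 50))   # ['client_model']
print(choose_workflow(True, 1000, 50))    # ['client_model', 'cloud_model']
```

In a real system the decision inputs (bandwidth, throughput, user preferences) would be measured at request time rather than passed as constants.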
- These and other features, and characteristics of the present technology, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
-
FIG. 1 illustrates a system configured for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms, in accordance with one or more implementations. -
FIG. 2 illustrates a method for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms, in accordance with one or more implementations. -
FIG. 1 illustrates a system configured for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms, in accordance with one or more implementations. In some implementations, system 100 may include one or more servers 102. The server(s) 102 and client computing platforms 104 may be configured to communicate according to a client/server architecture, a peer-to-peer architecture, and/or other architectures. The users may access system 100 via client computing platform(s) 104. - The client computing platform(s) 104 may be configured to execute machine-
readable instructions 106. The machine-readable instructions 106 may include one or more of a client-side neural network component 108, a client-side learning component 110, and/or other components. - The client-side
neural network component 108 may include a client-side machine learning model configured to facilitate deep neural network operations on structured data. The structured data may include one or more of images, images within images, symbols, logos, objects, video, audio, text, and/or other structured data. The deep neural network operations may relate to deep learning, deep structured learning, hierarchical learning, deep machine learning, and/or other types of machine learning. In some implementations, a deep neural network operation may include a set of algorithms that attempt to model high level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and/or non-linear transformations. - The deep neural network operations may be performed by a client application installed on a given
client computing platform 104. The client application may include a software application designed to run on the given client computing platform 104. The client application may locally access the client-side machine learning model in order to perform the deep neural network operations. These operations may occur at very low latency because the model is embedded on the client computing platform(s) 104, as opposed to running in the cloud. - In some implementations, the client-side
neural network component 108 may be embedded in the client application installed on the given client computing platform 104. In some implementations, the application may access the client-side neural network component 108 stored locally on the given client computing platform 104. In some implementations, the client-side neural network component 108 may include a code library enabling the deep neural network operations. In some implementations, the client-side neural network component 108 may include multiple client-side learning models such that an arbitrary workflow can be applied to determine desired outputs. - The client-
side learning component 110 may be configured to improve the client-side machine learning model based on one or more of new model states received from other locations in system 100 (described further below), usage of the application, user interactions with the given client computing platform 104, one or more other client computing platforms 104, and/or other information. According to some implementations, sharing of information beyond model states may facilitate learning for a distributed machine learning model from multiple client computing platform(s) 104 and/or server(s) 102. - In some implementations, the client-
side learning component 110 may be configured to improve the client-side machine learning model in real time or near-real time. In some implementations, learning from this data may occur in a software development kit (SDK). The SDK may provide a low-latency, no-wireless-network-connection capability to recognize new things in the world. In some implementations, learning may occur on an application programming interface (API) platform to benefit from data provided from multiple client computing platforms and/or other API platform users. - The server(s) 102 may be configured to execute machine-
readable instructions 112. The machine-readable instructions 112 may include one or more of a communications component 114, a cloud neural network component 116, a synchronization component 118, a cloud learning component 120, an optimization component 122, and/or other components. - The
communications component 114 may be configured to facilitate communications between client computing platform(s) 104 and server(s) 102. - The cloud
neural network component 116 may include a cloud machine learning model configured to facilitate deep neural network operations on structured data. These deep neural network operations may be the same as or similar to the ones facilitated by the client-side machine learning model described in connection with client-side neural network component 108. - The
synchronization component 118 may be configured to facilitate sharing of information between the cloud machine learning model and the client-side machine learning model. Such information may include model states of the cloud machine learning model and/or the client-side machine learning model. The model states may include model parameters. In some implementations, the sharing of model states occurs on a periodic basis. In some implementations, the sharing of model states occurs responsive to the client computing platform 104 coming online. The synchronization component 118 may be configured to facilitate sharing of model states between different client-side machine learning models embodied at different client computing platform(s) 104. - The
synchronization component 118 may be configured to facilitate sharing of information beyond the model states between the cloud machine learning model and the client-side machine learning model. Such information beyond the model states may include one or more of images, images within images, symbols, logos, objects, tags, videos, audio, geolocation, accelerometer data, metadata, and/or other information. In some implementations, sharing of information beyond the model states may be subject to end user consent. In some implementations, sharing of some information beyond the model states may be one-way sharing from the client computing platform 104. - The
cloud learning component 120 may be configured to improve the cloud machine learning model. Such improvements may be made based on one or more of new model states received from other locations in system 100 such as client computing platform(s) 104, usage of the application, user interactions with client computing platform(s) 104, and/or other information. In some implementations, the cloud learning component 120 may be configured to improve the cloud machine learning model in real time or near-real time. - In some implementations, advertisements may be pushed to individual
client computing platforms 104 based on the cloud machine learning model and/or other information received from client computing platform(s) 104. - The
optimization component 122 may be configured to determine workflows between a given client computing platform 104, one or more other client computing platforms 104, and/or server(s) 102 that chain multiple machine learning models. In some implementations, the workflows may be determined in an arbitrary graph. In some implementations, the workflow determination may be based on one or more of availability of network connection, network bandwidth, latency, throughput, number of concepts recognized by the machine learning models, importance of low latency, importance of high accuracy, user's preferences, preferences associated with the given client computing platform 104, and/or other information. - By way of non-limiting example, for a company, a set of cooperating
client computing platforms 104 may be those of the company's users. For individual ones of those client computing platforms 104, the company may obtain an understanding of the type of data their users encounter in the world. - As another illustrative example, using her client computing platform 104 (e.g., mobile phone), a user may be able to train a machine learning model of her pet inside her home. That specific machine learning model may be pushed to server(s) 102. The machine learning model may be deployed to run on the user's webcam to process live feeds of video in order to find any frames of video that contain her pet. This data may be reported as all video frames sent back to server(s) 102, or as select video frames depending on whether the pet is recognized. This data may serve as analytical data (e.g., the fraction of frames in which her pet was recognized).
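The frame-reporting logic in this example can be sketched as follows, with a stand-in recognizer; the function name and the `mode` flag are hypothetical, not from the disclosure:

```python
def report_frames(frames, recognize, mode="select"):
    """Filter webcam frames with a trained recognizer.

    `recognize` stands in for the user's trained pet model. `mode`
    controls whether every frame or only recognized frames are sent
    back to the server. Also returns the fraction of frames in which
    the pet was recognized, as an analytics signal.
    """
    hits = [f for f in frames if recognize(f)]
    fraction = len(hits) / len(frames) if frames else 0.0
    reported = frames if mode == "all" else hits
    return reported, fraction

frames = ["pet", "empty", "pet", "empty"]
reported, frac = report_frames(frames, recognize=lambda f: f == "pet")
print(reported, frac)  # ['pet', 'pet'] 0.5
```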
- Another non-limiting example may involve deploying recognition of the same concept to multiple devices for surveillance. A machine learning model could be trained to recognize a concept, and then be deployed immediately so the data from each video frame could be fed back to a centralized dashboard via client computing platform(s) 104 and/or server(s) 102 to alert when the desired concepts are recognized.
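The surveillance example can be sketched as a simple aggregation step; the device ids and the `recognize` callback are illustrative stand-ins for real camera feeds and a trained model:

```python
def dashboard_alerts(device_frames, recognize, concept):
    """Aggregate per-device recognitions into centralized alerts.

    device_frames maps a device id to the frames it captured; an alert
    is raised for every device whose feed contains the target concept.
    """
    alerts = []
    for device_id, frames in device_frames.items():
        if any(recognize(frame, concept) for frame in frames):
            alerts.append(device_id)
    return sorted(alerts)

feeds = {
    "cam-lobby": ["person", "empty"],
    "cam-dock": ["empty", "empty"],
    "cam-gate": ["person", "person"],
}
alerts = dashboard_alerts(feeds, recognize=lambda f, c: f == c, concept="person")
print(alerts)  # ['cam-gate', 'cam-lobby']
```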
- In some implementations, server(s) 102, client computing platform(s) 104, and/or
external resources 124 may be operatively linked via one or more electronic communication links. For example, such electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which server(s) 102, client computing platform(s) 104, and/or external resources 124 may be operatively linked via some other communication media. - A given
client computing platform 104 may include electronic storage 126, one or more processors 130 configured to execute machine-readable instructions, and/or other components. The machine-readable instructions may be configured to enable a user associated with the given client computing platform 104 to interface with system 100 and/or external resources 124, and/or provide other functionality attributed herein to client computing platform(s) 104. By way of non-limiting example, the given client computing platform 104 may include one or more of a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, a NetBook, a Smartphone, a gaming console, and/or other computing platforms. -
External resources 124 may include sources of information, hosts and/or providers of machine learning outside of system 100, external entities participating with system 100, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 124 may be provided by resources included in system 100. - Server(s) 102 may include
electronic storage 128, one or more processors 132, and/or other components. Server(s) 102 may include communication lines or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of server(s) 102 in FIG. 1 is not intended to be limiting. Server(s) 102 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server(s) 102. For example, server(s) 102 may be implemented by a cloud of computing platforms operating together as server(s) 102. -
Electronic storage 126 and 128 may comprise non-transitory storage media that electronically store information. The electronic storage media of electronic storage 126 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with a given client computing platform 104 and/or removable storage that is removably connectable to the given client computing platform 104 via, for example, a port (e.g., a USB port, a FireWire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storage media of electronic storage 128 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with server(s) 102 and/or removable storage that is removably connectable to server(s) 102 via, for example, a port (e.g., a USB port, a FireWire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 126 and 128 may include one or more of optically readable storage media, magnetically readable storage media, electrical charge-based storage media, solid-state storage media, and/or other electronically readable storage media. Electronic storage 128 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage 126 and 128 may store software algorithms, information determined by processor(s) 130 and 132, and/or other information that enables client computing platform(s) 104 and server(s) 102 to function as described herein. - Processor(s) 130 may be configured to provide information processing capabilities in client computing platform(s) 104. Processor(s) 132 may be configured to provide information processing capabilities in server(s) 102. As such, processor(s) 130 and 132 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s) 130 and 132 are shown in
FIG. 1 as single entities, this is for illustrative purposes only. In some implementations, processor(s) 130 and 132 may each include a plurality of processing units. For example, processor(s) 132 may represent processing functionality of a plurality of servers operating in coordination. The processor(s) 130 and/or 132 may be configured to execute the machine-readable instruction components described herein by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s) 130 and/or 132. - It should be appreciated that although machine-
readable instruction components are illustrated in FIG. 1 as being implemented within single processing units, in implementations in which processor(s) 130 and/or 132 include multiple processing units, one or more of the machine-readable instruction components may be implemented remotely from the others. The description of the functionality provided by the different machine-readable instruction components is for illustrative purposes, and is not intended to be limiting, as any of the machine-readable instruction components may provide more or less functionality than is described. For example, one or more of the machine-readable instruction components may be eliminated, and some or all of its functionality may be provided by other machine-readable instruction components. As another example, processor(s) 130 and/or 132 may be configured to execute one or more additional machine-readable instruction components that may perform some or all of the functionality attributed herein to one of the machine-readable instruction components. -
FIG. 2 illustrates a method 200 for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms, in accordance with one or more implementations. The operations of method 200 presented below are intended to be illustrative. In some implementations, method 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 200 are illustrated in FIG. 2 and described below is not intended to be limiting. - In some implementations,
method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200. - At an
operation 202, a client computing platform 104A may perform deep neural network operations on structured data. The structured data may include one or more of images, images within images, symbols, logos, objects, video, audio, text, and/or other structured data obtained by or stored by the client computing platform 104A. The deep neural network operations may be performed by a client application installed on the client computing platform 104A. The client application may locally access the client-side machine learning model in order to perform the deep neural network operations. Operation 202 may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to client-side neural network component 108 (as described in connection with FIG. 1), in accordance with one or more implementations. - At an
operation 204, the client computing platform 104A may improve the client-side machine learning model based on one or more of new model states received from other locations in system 100, usage of the application, user interactions with the given client computing platform 104, and/or other information. Operation 204 may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to client-side learning component 110 (as described in connection with FIG. 1), in accordance with one or more implementations. - At an
operation 206, server(s) 102 may perform deep neural network operations on structured data. These deep neural network operations may be the same as or similar to the ones facilitated by the client-side machine learning model described in connection with client-side neural network component 108. Operation 206 may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to cloud neural network component 116 (as described in connection with FIG. 1), in accordance with one or more implementations. - At an
operation 208, server(s) 102 may improve the cloud machine learning model. Such improvements may be made based on one or more of new model states received from other locations in system 100, usage of the application, user interactions with client computing platform(s) 104, and/or other information. Operation 208 may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to cloud learning component 120 (as described in connection with FIG. 1), in accordance with one or more implementations. - At
operations 210A, 210B, and/or 210C, model states may be shared between client computing platform 104A, server(s) 102, and/or other client computing platform(s) 104X. Operations 210A, 210B, and/or 210C may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to synchronization component 118 (as described in connection with FIG. 1), in accordance with one or more implementations. - At
operations 212A, 212B, and/or 212C, information beyond the model states may be shared between the cloud machine learning model and the client-side machine learning model. Such information beyond the model states may include one or more of images, images within images, symbols, logos, objects, tags, videos, audio, geolocation, accelerometer data, metadata, and/or other information. In some implementations, sharing of information beyond the model states may be subject to end user consent. The sharing may occur between client computing platform 104A, server(s) 102, and/or other client computing platform(s) 104X. In some implementations, sharing of some information beyond the model states may be one-way sharing from the client computing platform 104. Operations 212A, 212B, and/or 212C may be performed by one or more hardware processors configured to execute a machine-readable instruction component that is the same as or similar to synchronization component 118 (as described in connection with FIG. 1), in accordance with one or more implementations. -
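A minimal sketch of the model-state sharing described for these operations, assuming model states are dictionaries of named parameters; the class and method names are illustrative, not from the disclosure:

```python
class SynchronizationComponent:
    """Sketch of model-state sharing between the cloud model and a
    client-side model. States are plain dicts of named parameters;
    a production system would serialize actual network weights."""

    def __init__(self, sync_interval_s: float):
        self.sync_interval_s = sync_interval_s
        self.last_sync = None

    def should_sync(self, now: float, just_came_online: bool) -> bool:
        # Share states on a periodic basis, or responsive to the
        # client computing platform coming online.
        if just_came_online or self.last_sync is None:
            return True
        return (now - self.last_sync) >= self.sync_interval_s

    def sync(self, cloud_state: dict, client_state: dict, now: float) -> dict:
        # Two-way sharing: start from the cloud parameters and overlay
        # any parameters the client is allowed to push back.
        merged = {**cloud_state, **client_state}
        self.last_sync = now
        return merged

sync = SynchronizationComponent(sync_interval_s=60.0)
state = sync.sync({"w": 0.5, "b": 0.0}, {"b": 0.1}, now=0.0)
print(state)  # {'w': 0.5, 'b': 0.1}
print(sync.should_sync(now=30.0, just_came_online=False))  # False
print(sync.should_sync(now=90.0, just_came_online=False))  # True
```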
The operations described above may be repeated such that learning in system 100 can be recursive. - Although the present technology has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the technology is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present technology contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
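The recursive character of the learning loop can be sketched as a federated-averaging-style round; the averaging rule is an assumption for illustration, since the disclosure does not prescribe a specific aggregation method:

```python
def average_states(states):
    """Average the parameter dicts reported by several models."""
    keys = states[0].keys()
    return {k: sum(s[k] for s in states) / len(states) for k in keys}

def cooperative_round(cloud_state, client_states):
    """One recursive round: the cloud model improves from client model
    states, and the improved state is shared back to every client so
    the next round starts from the collaboratively trained model."""
    new_cloud = average_states(client_states + [cloud_state])
    return new_cloud, [dict(new_cloud) for _ in client_states]

cloud = {"w": 0.0}
clients = [{"w": 1.0}, {"w": 2.0}, {"w": 3.0}]
cloud, clients = cooperative_round(cloud, clients)
print(cloud)  # {'w': 1.5}
```

Repeating `cooperative_round` is what makes the learning recursive: each round's output state becomes the next round's input for both the cloud and the clients.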
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/115,395 US20210089918A1 (en) | 2016-09-26 | 2020-12-08 | Systems and methods for cooperative machine learning |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/276,655 US10867241B1 (en) | 2016-09-26 | 2016-09-26 | Systems and methods for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms |
US17/115,395 US20210089918A1 (en) | 2016-09-26 | 2020-12-08 | Systems and methods for cooperative machine learning |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/276,655 Continuation US10867241B1 (en) | 2016-09-26 | 2016-09-26 | Systems and methods for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210089918A1 true US20210089918A1 (en) | 2021-03-25 |
Family
ID=73746618
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/276,655 Active 2038-05-21 US10867241B1 (en) | 2016-09-26 | 2016-09-26 | Systems and methods for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms |
US17/115,395 Abandoned US20210089918A1 (en) | 2016-09-26 | 2020-12-08 | Systems and methods for cooperative machine learning |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/276,655 Active 2038-05-21 US10867241B1 (en) | 2016-09-26 | 2016-09-26 | Systems and methods for cooperative machine learning across multiple client computing platforms and the cloud enabling off-line deep neural network operations on client computing platforms |
Country Status (1)
Country | Link |
---|---|
US (2) | US10867241B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024134433A1 (en) * | 2022-12-22 | 2024-06-27 | Lumana Inc. | Hybrid machine learning architecture for visual content processing and uses thereof |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11537262B1 (en) | 2015-07-21 | 2022-12-27 | Monotype Imaging Inc. | Using attributes for font recommendations |
US11334750B2 (en) * | 2017-09-07 | 2022-05-17 | Monotype Imaging Inc. | Using attributes for predicting imagery performance |
US11657602B2 (en) | 2017-10-30 | 2023-05-23 | Monotype Imaging Inc. | Font identification from imagery |
CN113741863A (en) * | 2021-07-29 | 2021-12-03 | 南方电网深圳数字电网研究院有限公司 | Application program generation method based on algorithm model, electronic device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110320520A1 (en) * | 2010-06-23 | 2011-12-29 | Microsoft Corporation | Dynamic partitioning of applications between clients and servers |
US20120191631A1 (en) * | 2011-01-26 | 2012-07-26 | Google Inc. | Dynamic Predictive Modeling Platform |
US20150170053A1 (en) * | 2013-12-13 | 2015-06-18 | Microsoft Corporation | Personalized machine learning models |
US20160350658A1 (en) * | 2015-06-01 | 2016-12-01 | Microsoft Technology Licensing, Llc | Viewport-based implicit feedback |
US20170366562A1 (en) * | 2016-06-15 | 2017-12-21 | Trustlook Inc. | On-Device Maliciousness Categorization of Application Programs for Mobile Devices |
US20180083898A1 (en) * | 2016-09-20 | 2018-03-22 | Google Llc | Suggested responses based on message stickers |
US9967226B2 (en) * | 2014-06-30 | 2018-05-08 | Microsoft Technology Licensing, Llc | Personalized delivery time optimization |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150081543A1 (en) * | 2013-09-16 | 2015-03-19 | International Business Machines Corporation | Analytics driven assessment of transactional risk daily limit exceptions |
- 2016-09-26: US US15/276,655 patent/US10867241B1/en active Active
- 2020-12-08: US US17/115,395 patent/US20210089918A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US10867241B1 (en) | 2020-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210089918A1 (en) | Systems and methods for cooperative machine learning | |
US20210342745A1 (en) | Artificial intelligence model and data collection/development platform | |
US11681943B2 (en) | Artificial intelligence development via user-selectable/connectable model representations | |
US10728345B2 (en) | Field service management mobile offline synchronization | |
US10083054B2 (en) | Application-based computing resource management | |
KR20200094207A (en) | Methods and systems for generating personalized emoticons and lip syncing videos based on face recognition | |
US10616531B2 (en) | Video feeds in collaboration environments | |
US10652075B2 (en) | Systems and methods for selecting content items and generating multimedia content | |
US10798399B1 (en) | Adaptive video compression | |
US10268549B2 (en) | Heuristic process for inferring resource dependencies for recovery planning | |
US20180276478A1 (en) | Determining Most Representative Still Image of a Video for Specific User | |
US10318639B2 (en) | Intelligent action recommendation | |
US11159631B2 (en) | Integration of social interactions into media sharing | |
US10348663B2 (en) | Integration of social interactions into media sharing | |
US11574019B2 (en) | Prediction integration for data management platforms | |
CN111492394A (en) | Attendee engagement determination systems and methods | |
US20210117775A1 (en) | Automated selection of unannotated data for annotation based on features generated during training | |
US11061982B2 (en) | Social media tag suggestion based on product recognition | |
US10423821B2 (en) | Automated profile image generation based on scheduled video conferences | |
US10706074B2 (en) | Embeddings with multiple relationships | |
US9298744B2 (en) | Method and apparatus for ordering images in an image set based on social interactions and viewer preferences | |
US10140518B1 (en) | Content information auditing service | |
US20180061258A1 (en) | Data driven feature discovery | |
US10924586B2 (en) | Aggregating virtual reality (VR) sessions | |
US20230144585A1 (en) | Machine learning model change detection and versioning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: CLARIFAI, INC., NEW YORK. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: ROGERS, JOHN; MOST, KEVIN; ZEILER, MATTHEW; SIGNING DATES FROM 20150526 TO 20181224; REEL/FRAME: 054581/0554 |
 | STPP | Information on status: patent application and granting procedure in general | APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
 | STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
 | STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
 | STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
 | STCB | Information on status: application discontinuation | ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |