Hu et al., 2022 - Google Patents
Learning from visual demonstrations via replayed task-contrastive model-agnostic meta-learning
- Document ID
- 14491362352983589394
- Authors
- Hu Z
- Li W
- Gan Z
- Guo W
- Zhu J
- Wen J
- Zhou D
- Publication year
- 2022
- Publication venue
- IEEE Transactions on Circuits and Systems for Video Technology
Snippet
With the increasing application of versatile robotics, the need for end-users to teach robotic tasks via visual/video demonstrations in different environments is growing rapidly. One possible method is meta-learning. However, most meta-learning methods are tailored for …
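The method named in the title builds on model-agnostic meta-learning (MAML): a meta-policy is adapted to each new task with a few inner-loop gradient steps, and the meta-parameters are trained in an outer loop so that this quick adaptation succeeds across tasks. As a rough illustration only (a toy regression setup in PyTorch, not the authors' implementation and not what the patent claims), the core inner/outer update might look like this:

```python
import torch

# Toy MAML sketch (hypothetical setup, not the authors' code):
# adapt a linear model to each task with one inner gradient step,
# then update the meta-parameters through that adaptation.

torch.manual_seed(0)

def loss_fn(w, x, y):
    # Mean-squared error of the linear model y_hat = x @ w.
    return ((x @ w - y) ** 2).mean()

def make_task():
    # Random linear-regression task; returns support and query sets.
    true_w = torch.randn(3, 1)
    x = torch.randn(16, 3)
    y = x @ true_w + 0.1 * torch.randn(16, 1)
    return x[:8], y[:8], x[8:], y[8:]

meta_w = torch.zeros(3, 1, requires_grad=True)
meta_opt = torch.optim.SGD([meta_w], lr=1e-2)
inner_lr = 0.1

for step in range(100):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):  # a meta-batch of tasks
        xs, ys, xq, yq = make_task()
        # Inner loop: one gradient step on the support set;
        # create_graph=True lets the outer loop differentiate through it.
        inner_loss = loss_fn(meta_w, xs, ys)
        (grad,) = torch.autograd.grad(inner_loss, meta_w, create_graph=True)
        adapted_w = meta_w - inner_lr * grad
        # Outer objective: post-adaptation loss on the query set.
        meta_loss = meta_loss + loss_fn(adapted_w, xq, yq)
    meta_loss.backward()
    meta_opt.step()
```

Per the title and snippet, the actual method adds replay and a task-contrastive objective on top of this adaptation scheme and learns from visual demonstrations rather than toy regression data.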
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
            - G06K9/6232—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
              - G06K9/6247—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods based on an approximation criterion, e.g. principal component analysis
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
          - G06N3/04—Architectures, e.g. interconnection topology
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N99/00—Subject matter not provided for in other groups of this subclass
        - G06N99/005—Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N5/00—Computer systems utilising knowledge based models
        - G06N5/02—Knowledge representation
          - G06N5/022—Knowledge engineering, knowledge acquisition
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/50—Computer-aided design
Similar Documents
Publication | Title
---|---
Sadeghi et al. | Sim2Real viewpoint invariant visual servoing by recurrent control
Dasari et al. | Transformers for one-shot visual imitation
Newbury et al. | Deep learning approaches to grasp synthesis: A review
Fang et al. | Learning task-oriented grasping for tool manipulation from simulated self-supervision
Hundt et al. | "Good Robot!": Efficient reinforcement learning for multi-step visual tasks with sim to real transfer
Levine et al. | Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
Savarimuthu et al. | Teaching a robot the semantics of assembly tasks
Calinon et al. | Compliant skills acquisition and multi-optima policy search with EM-based reinforcement learning
Wang et al. | RL-VLM-F: Reinforcement learning from vision language foundation model feedback
Liu et al. | Frame mining: A free lunch for learning robotic manipulation from 3D point clouds
Fu et al. | Active learning-based grasp for accurate industrial manipulation
Yang et al. | Learning actions from human demonstration video for robotic manipulation
Aslan et al. | New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration
Bharadhwaj et al. | Track2Act: Predicting point tracks from Internet videos enables generalizable robot manipulation
Jiang et al. | Mastering the complex assembly task with a dual-arm robot: A novel reinforcement learning method
Hu et al. | REBOOT: Reuse data for bootstrapping efficient real-world dexterous manipulation
Kim et al. | Giving robots a hand: Learning generalizable manipulation with eye-in-hand human video demonstrations
Fu et al. | In-context imitation learning via next-token prediction
Guo et al. | Geometric task networks: Learning efficient and explainable skill coordination for object manipulation
Hu et al. | Learning from visual demonstrations via replayed task-contrastive model-agnostic meta-learning
Hu et al. | Learning with dual demonstration domains: Random domain-adaptive meta-learning
Pardowitz et al. | Towards life-long learning in household robots: The Piagetian approach
Bonsignorio et al. | Deep learning and machine learning in robotics [from the guest editors]
Beltran-Hernandez et al. | Learning to grasp with primitive shaped object policies