

Learning from visual demonstrations via replayed task-contrastive model-agnostic meta-learning

Hu et al., 2022

Document ID: 14491362352983589394
Authors: Hu Z, Li W, Gan Z, Guo W, Zhu J, Wen J, Zhou D
Publication year: 2022
Publication venue: IEEE Transactions on Circuits and Systems for Video Technology

Snippet

With the increasing application of versatile robotics, the need for end-users to teach robotic tasks via visual/video demonstrations in different environments is increasing fast. One possible method is meta-learning. However, most meta-learning methods are tailored for …
Continue reading at ieeexplore.ieee.org (other versions)
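
The title and snippet refer to model-agnostic meta-learning (MAML) as a way to adapt to a new task from a few demonstrations. Below is a minimal, illustrative sketch of the generic MAML inner/outer update on a toy sine-regression problem, not the paper's replayed task-contrastive variant; the toy task, two-layer network, single inner step, and learning rates are assumptions chosen only to show the structure of the algorithm.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    def sample_task():
        # Toy sine-regression "task": amplitude and phase are sampled per task.
        amp = torch.rand(1) * 4.0 + 0.1
        phase = torch.rand(1) * 3.1416
        def draw(n=20):
            x = torch.rand(n, 1) * 10.0 - 5.0
            return x, amp * torch.sin(x + phase)
        return draw

    def forward(params, x):
        # Two-layer MLP written functionally so adapted weights can be evaluated.
        w1, b1, w2, b2 = params
        return F.linear(torch.relu(F.linear(x, w1, b1)), w2, b2)

    def init_param(*shape, scale=0.1):
        return (torch.randn(*shape) * scale).requires_grad_()

    # Meta-parameters: the shared initialisation that MAML learns.
    params = [init_param(40, 1), torch.zeros(40, requires_grad=True),
              init_param(1, 40), torch.zeros(1, requires_grad=True)]
    meta_opt = torch.optim.Adam(params, lr=1e-3)
    inner_lr = 0.01

    for step in range(2000):
        meta_opt.zero_grad()
        meta_loss = 0.0
        for _ in range(4):                     # a small batch of tasks
            draw = sample_task()
            x_sup, y_sup = draw()              # support ("demonstration") set
            x_qry, y_qry = draw()              # query set for the outer loss
            # Inner loop: one SGD step on the support loss; create_graph=True
            # lets the outer update differentiate through the adaptation.
            sup_loss = F.mse_loss(forward(params, x_sup), y_sup)
            grads = torch.autograd.grad(sup_loss, params, create_graph=True)
            fast = [p - inner_lr * g for p, g in zip(params, grads)]
            # Outer loss: how well the adapted parameters do on held-out data.
            meta_loss = meta_loss + F.mse_loss(forward(fast, x_qry), y_qry)
        (meta_loss / 4).backward()
        meta_opt.step()
        if step % 500 == 0:
            print(f"step {step}: meta-loss {meta_loss.item() / 4:.3f}")

The outer optimizer updates only the initialisation; adaptation to each new task happens in the inner step on that task's support data, which is what makes the approach attractive for learning from a handful of visual demonstrations.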

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06K: RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00: Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62: Methods or arrangements for recognition using electronic means
    • G06K9/6217: Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06K9/6232: Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
    • G06K9/6247: Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods based on an approximation criterion, e.g. principal component analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computer systems based on biological models
    • G06N3/02: Computer systems based on biological models using neural network models
    • G06N3/04: Architectures, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00: Subject matter not provided for in other groups of this subclass
    • G06N99/005: Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computer systems utilising knowledge based models
    • G06N5/02: Knowledge representation
    • G06N5/022: Knowledge engineering, knowledge acquisition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50: Computer-aided design

Similar Documents

Publication and Title
Sadeghi et al. Sim2real viewpoint invariant visual servoing by recurrent control
Dasari et al. Transformers for one-shot visual imitation
Newbury et al. Deep learning approaches to grasp synthesis: A review
Fang et al. Learning task-oriented grasping for tool manipulation from simulated self-supervision
Hundt et al. “good robot!”: Efficient reinforcement learning for multi-step visual tasks with sim to real transfer
Levine et al. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
Savarimuthu et al. Teaching a robot the semantics of assembly tasks
Calinon et al. Compliant skills acquisition and multi-optima policy search with EM-based reinforcement learning
Sadeghi et al. Sim2real view invariant visual servoing by recurrent control
Wang et al. Rl-vlm-f: Reinforcement learning from vision language foundation model feedback
Liu et al. Frame mining: a free lunch for learning robotic manipulation from 3d point clouds
Fu et al. Active learning-based grasp for accurate industrial manipulation
Yang et al. Learning actions from human demonstration video for robotic manipulation
Aslan et al. New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration
Bharadhwaj et al. Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation
Jiang et al. Mastering the complex assembly task with a dual-arm robot: A novel reinforcement learning method
Hu et al. Reboot: Reuse data for bootstrapping efficient real-world dexterous manipulation
Kim et al. Giving robots a hand: Learning generalizable manipulation with eye-in-hand human video demonstrations
Fu et al. In-context imitation learning via next-token prediction
Guo et al. Geometric task networks: Learning efficient and explainable skill coordination for object manipulation
Hu et al. Learning from visual demonstrations via replayed task-contrastive model-agnostic meta-learning
Hu et al. Learning with dual demonstration domains: Random domain-adaptive meta-learning
Pardowitz et al. Towards life-long learning in household robots: The piagetian approach
Bonsignorio et al. Deep learning and machine learning in robotics [from the guest editors]
Beltran-Hernandez et al. Learning to grasp with primitive shaped object policies