Jesse Thomason
2020 – today
2024
- [c47] Tejas Srinivasan, Jack Hessel, Tanmay Gupta, Bill Yuchen Lin, Yejin Choi, Jesse Thomason, Khyathi Raghavi Chandu: Selective "Selective Prediction": Reducing Unnecessary Abstention in Vision-Language Reasoning. ACL (Findings) 2024: 12935-12948
- [c46] Ting-Yun Chang, Jesse Thomason, Robin Jia: When Parts Are Greater Than Sums: Individual LLM Components Can Outperform Full Models. EMNLP 2024: 10280-10299
- [c45] Chancharik Mitra, Abrar Anwar, Rodolfo Corona, Dan Klein, Trevor Darrell, Jesse Thomason: Which One? Leveraging Context Between Objects and Multiple Views for Language Grounding. NAACL-HLT 2024: 3177-3189
- [c44] Ting-Yun Chang, Jesse Thomason, Robin Jia: Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks. NAACL-HLT 2024: 3190-3211
- [c43] Wang Zhu, Alekh Agarwal, Mandar Joshi, Robin Jia, Jesse Thomason, Kristina Toutanova: Efficient End-to-End Visual Document Understanding with Rationale Distillation. NAACL-HLT 2024: 8401-8424
- [i48] Wilbert Pumacay, Ishika Singh, Jiafei Duan, Ranjay Krishna, Jesse Thomason, Dieter Fox: THE COLOSSEUM: A Benchmark for Evaluating Generalization for Robotic Manipulation. CoRR abs/2402.08191 (2024)
- [i47] Woojeong Jin, Tejas Srinivasan, Jesse Thomason, Xiang Ren: WinoViz: Probing Visual Properties of Objects Under Different States. CoRR abs/2402.13584 (2024)
- [i46] Tejas Srinivasan, Jack Hessel, Tanmay Gupta, Bill Yuchen Lin, Yejin Choi, Jesse Thomason, Khyathi Raghavi Chandu: Selective "Selective Prediction": Reducing Unnecessary Abstention in Vision-Language Reasoning. CoRR abs/2402.15610 (2024)
- [i45] Anthony Liang, Jesse Thomason, Erdem Biyik: ViSaRL: Visual Reinforcement Learning Guided by Human Saliency. CoRR abs/2403.10940 (2024)
- [i44] Ishika Singh, David Traum, Jesse Thomason: TwoStep: Multi-agent Task Planning using Classical Planners and Large Language Models. CoRR abs/2403.17246 (2024)
- [i43] Wang Zhu, Ishika Singh, Robin Jia, Jesse Thomason: Language Models can Infer Action Semantics for Classical Planners from Environment Feedback. CoRR abs/2406.02791 (2024)
- [i42] Ting-Yun Chang, Jesse Thomason, Robin Jia: When Parts are Greater Than Sums: Individual LLM Components Can Outperform Full Models. CoRR abs/2406.13131 (2024)
- [i41] Abrar Anwar, Rohan Gupta, Jesse Thomason: Contrast Sets for Evaluating Language-Guided Robot Policies. CoRR abs/2406.13636 (2024)
- [i40] Zain Merchant, Abrar Anwar, Emily Wang, Souti Chattopadhyay, Jesse Thomason: Generating Contextually-Relevant Navigation Instructions for Blind and Low Vision People. CoRR abs/2407.08219 (2024)
2023
- [j4] Xiaohan Zhang, Saeid Amiri, Jivko Sinapov, Jesse Thomason, Peter Stone, Shiqi Zhang: Multimodal embodied attribute learning by robots for object-centric action policies. Auton. Robots 47(5): 505-528 (2023)
- [j3] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg: ProgPrompt: program generation for situated robot task planning using large language models. Auton. Robots 47(8): 999-1012 (2023)
- [c42] Lee Kezar, Jesse Thomason, Naomi Caselli, Zed Sevcikova Sehyr, Elana Pontecorvo: The Sem-Lex Benchmark: Modeling ASL Signs and their Phonemes. ASSETS 2023: 34:1-34:10
- [c41] Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason: I2I: Initializing Adapters with Improvised Knowledge. CoLLAs 2023: 923-935
- [c40] Tejas Srinivasan, Xiang Ren, Jesse Thomason: Curriculum Learning for Data-Efficient Vision-Language Alignment. CVPR Workshops 2023: 5619-5624
- [c39] Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason J. Corso, Peter Anderson, Stefan Lee, Jesse Thomason: Iterative Vision-and-Language Navigation. CVPR 2023: 14921-14930
- [c38] Lee Kezar, Jesse Thomason, Zed Sevcikova Sehyr: Improving Sign Recognition with Phonology. EACL 2023: 2724-2729
- [c37] Yuliang Cai, Jesse Thomason, Mohammad Rostami: Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation. EMNLP (Findings) 2023: 6986-7000
- [c36] Wang Zhu, Jesse Thomason, Robin Jia: Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering. EMNLP 2023: 8845-8860
- [c35] Lee Kezar, Tejas Srinivasan, Riley Carlin, Jesse Thomason, Zed Sevcikova Sehyr, Naomi Caselli: Exploring Strategies for Modeling Sign Language Phonology. ESANN 2023
- [c34] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg: ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. ICRA 2023: 11523-11530
- [c33] Allen Chang, Xiaoyuan Zhu, Aarav Monga, Seoho Ahn, Tejas Srinivasan, Jesse Thomason: Multimodal Speech Recognition for Language-Guided Embodied Agents. INTERSPEECH 2023: 1608-1612
- [c32] Gunnar A. Sigurdsson, Jesse Thomason, Gaurav S. Sukhatme, Robinson Piramuthu: RREx-BoT: Remote Referring Expressions with a Bag of Tricks. IROS 2023: 5203-5210
- [c31] Julie Jiang, Jesse Thomason, Francesco Barbieri, Emilio Ferrara: Geolocated Social Media Posts are Happier: Understanding the Characteristics of Check-in Posts on Twitter. WebSci 2023: 136-146
- [i39] Gunnar A. Sigurdsson, Jesse Thomason, Gaurav S. Sukhatme, Robinson Piramuthu: RREx-BoT: Remote Referring Expressions with a Bag of Tricks. CoRR abs/2301.12614 (2023)
- [i38] Lee Kezar, Jesse Thomason, Zed Sevcikova Sehyr: Improving Sign Recognition with Phonology. CoRR abs/2302.05759 (2023)
- [i37] Allen Chang, Xiaoyuan Zhu, Aarav Monga, Seoho Ahn, Tejas Srinivasan, Jesse Thomason: Multimodal Speech Recognition for Language-Guided Embodied Agents. CoRR abs/2302.14030 (2023)
- [i36] Yuliang Cai, Jesse Thomason, Mohammad Rostami: Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation. CoRR abs/2303.14423 (2023)
- [i35] Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason: I2I: Initializing Adapters with Improvised Knowledge. CoRR abs/2304.02168 (2023)
- [i34] Wang Zhu, Jesse Thomason, Robin Jia: Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering. CoRR abs/2305.14901 (2023)
- [i33] Lee Kezar, Riley Carlin, Tejas Srinivasan, Zed Sevcikova Sehyr, Naomi Caselli, Jesse Thomason: Exploring Strategies for Modeling Sign Language Phonology. CoRR abs/2310.00195 (2023)
- [i32] Lee Kezar, Elana Pontecorvo, Adele Daniels, Connor Baer, Ruth Ferster, Lauren Berger, Jesse Thomason, Zed Sevcikova Sehyr, Naomi Caselli: The Sem-Lex Benchmark: Modeling ASL Signs and Their Phonemes. CoRR abs/2310.00196 (2023)
- [i31] Chancharik Mitra, Abrar Anwar, Rodolfo Corona, Dan Klein, Trevor Darrell, Jesse Thomason: Comparative Multi-View Language Grounding. CoRR abs/2311.06694 (2023)
- [i30] Ting-Yun Chang, Jesse Thomason, Robin Jia: Do Localization Methods Actually Localize Memorized Data in LLMs? CoRR abs/2311.09060 (2023)
- [i29] Wang Zhu, Alekh Agarwal, Mandar Joshi, Robin Jia, Jesse Thomason, Kristina Toutanova: Efficient End-to-End Visual Document Understanding with Rationale Distillation. CoRR abs/2311.09612 (2023)
- [i28] Wang Zhu, Ishika Singh, Yuan Huang, Robin Jia, Jesse Thomason: Does VLN Pretraining Work with Nonsensical or Irrelevant Instructions? CoRR abs/2311.17280 (2023)
2022
- [c30] Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gökhan Tür, Dilek Hakkani-Tür: TEACh: Task-Driven Embodied Agents That Chat. AAAI 2022: 2017-2025
- [c29] Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason, Xin Wang: Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions. ACL (1) 2022: 7606-7623
- [c28] Wang Zhu, Jesse Thomason, Robin Jia: Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems. EMNLP (Findings) 2022: 4697-4711
- [c27] Arjun R. Akula, Spandana Gella, Aishwarya Padmakumar, Mahdi Namazifar, Mohit Bansal, Jesse Thomason, Dilek Hakkani-Tur: ALFRED-L: Investigating the Role of Language for Action Learning in Interactive Visual Environments. EMNLP 2022: 9369-9378
- [c26] Arjun Majumdar, Gunnar A. Sigurdsson, Robinson Piramuthu, Jesse Thomason, Dhruv Batra, Gaurav S. Sukhatme: SSL Enables Learning from Sparse Rewards in Image-Goal Navigation. ICML 2022: 14774-14785
- [c25] Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason: CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks. NeurIPS 2022
- [i27] Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason, Xin Eric Wang: Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions. CoRR abs/2203.12667 (2022)
- [i26] Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason: CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks. CoRR abs/2206.09059 (2022)
- [i25] Sara Mohammadinejad, Jesse Thomason, Jyotirmoy V. Deshmukh: Interactive Learning from Natural Language and Demonstrations using Signal Temporal Logic. CoRR abs/2207.00627 (2022)
- [i24] Julie Jiang, Jesse Thomason, Francesco Barbieri, Emilio Ferrara: Geolocated Social Media Posts are Happier: Understanding the Characteristics of Check-in Posts on Twitter. CoRR abs/2207.10887 (2022)
- [i23] Tejas Srinivasan, Xiang Ren, Jesse Thomason: Curriculum Learning for Data-Efficient Vision-Language Alignment. CoRR abs/2207.14525 (2022)
- [i22] Georgios Chochlakis, Tejas Srinivasan, Jesse Thomason, Shrikanth Narayanan: VAuLT: Augmenting the Vision-and-Language Transformer with the Propagation of Deep Language Representations. CoRR abs/2208.09021 (2022)
- [i21] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg: ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. CoRR abs/2209.11302 (2022)
- [i20] Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason J. Corso, Peter Anderson, Stefan Lee, Jesse Thomason: Iterative Vision-and-Language Navigation. CoRR abs/2210.03087 (2022)
- [i19] Matt Deitke, Dhruv Batra, Yonatan Bisk, Tommaso Campari, Angel X. Chang, Devendra Singh Chaplot, Changan Chen, Claudia Pérez-D'Arpino, Kiana Ehsani, Ali Farhadi, Li Fei-Fei, Anthony G. Francis, Chuang Gan, Kristen Grauman, David Hall, Winson Han, Unnat Jain, Aniruddha Kembhavi, Jacob Krantz, Stefan Lee, Chengshu Li, Sagnik Majumder, Oleksandr Maksymets, Roberto Martín-Martín, Roozbeh Mottaghi, Sonia Raychaudhuri, Mike Roberts, Silvio Savarese, Manolis Savva, Mohit Shridhar, Niko Sünderhauf, Andrew Szot, Ben Talbot, Joshua B. Tenenbaum, Jesse Thomason, Alexander Toshev, Joanne Truong, Luca Weihs, Jiajun Wu: Retrospectives on the Embodied AI Workshop. CoRR abs/2210.06849 (2022)
- [i18] Wang Zhu, Jesse Thomason, Robin Jia: Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems. CoRR abs/2210.15037 (2022)
- [i17] Vishnu Sashank Dorbala, Gunnar A. Sigurdsson, Robinson Piramuthu, Jesse Thomason, Gaurav S. Sukhatme: CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation. CoRR abs/2211.16649 (2022)
2021
- [c24] Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer: Language Grounding with 3D Objects. CoRL 2021: 1691-1701
- [i16] Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer: Language Grounding with 3D Objects. CoRR abs/2107.12514 (2021)
- [i15] Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai, Gaurav S. Sukhatme: Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion. CoRR abs/2108.04927 (2021)
- [i14] Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gökhan Tür, Dilek Hakkani-Tür: TEACh: Task-driven Embodied Agents that Chat. CoRR abs/2110.00534 (2021)
- [i13] Yizhou Zhao, Kaixiang Lin, Zhiwei Jia, Qiaozi Gao, Govind Thattai, Jesse Thomason, Gaurav S. Sukhatme: LUMINOUS: Indoor Scene Generation for Embodied AI Challenges. CoRR abs/2111.05527 (2021)
2020
- [j2] Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin W. Hart, Peter Stone, Raymond J. Mooney: Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog. J. Artif. Intell. Res. 67: 327-374 (2020)
- [c23] Shurjo Banerjee, Jesse Thomason, Jason J. Corso: The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation. CoRL 2020: 1384-1393
- [c22] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox: ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. CVPR 2020: 10737-10746
- [c21] Homero Roman Roman, Yonatan Bisk, Jesse Thomason, Asli Celikyilmaz, Jianfeng Gao: RMM: A Recursive Mental Model for Dialog Navigation. EMNLP (Findings) 2020: 1732-1745
- [c20] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, Joseph P. Turian: Experience Grounds Language. EMNLP (1) 2020: 8718-8735
- [c19] Collin Burns, Jesse Thomason, Wesley Tansey: Interpreting Black Box Models via Hypothesis Testing. FODS 2020: 47-57
- [i12] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, Joseph P. Turian: Experience Grounds Language. CoRR abs/2004.10151 (2020)
- [i11] Homero Roman Roman, Yonatan Bisk, Jesse Thomason, Asli Celikyilmaz, Jianfeng Gao: RMM: A Recursive Mental Model for Dialog Navigation. CoRR abs/2005.00728 (2020)
- [i10] Shurjo Banerjee, Jesse Thomason, Jason J. Corso: The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation. CoRR abs/2010.12639 (2020)
2010 – 2019
2019
- [c18] Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer: Vision-and-Dialog Navigation. CoRL 2019: 394-406
- [c17] Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin W. Hart, Peter Stone, Raymond J. Mooney: Improving Grounded Natural Language Understanding through Human-Robot Dialog. ICRA 2019: 6934-6941
- [c16] Chris Paxton, Yonatan Bisk, Jesse Thomason, Arunkumar Byravan, Dieter Fox: Prospection: Interpretable plans from language by predicting the future. ICRA 2019: 6942-6948
- [c15] Saeid Amiri, Sujay Bajracharya, Cihangir Goktolga, Jesse Thomason, Shiqi Zhang: Augmenting Knowledge through Statistical, Goal-oriented Human-Robot Dialog. IROS 2019: 744-750
- [c14] Rosario Scalise, Jesse Thomason, Yonatan Bisk, Siddhartha S. Srinivasa: Improving Robot Success Detection using Static Object Data. IROS 2019: 4229-4235
- [c13] Jesse Thomason, Daniel Gordon, Yonatan Bisk: Shifting the Baseline: Single Modality Performance on Visual Navigation & QA. NAACL-HLT (1) 2019: 1977-1983
- [i9] Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin W. Hart, Peter Stone, Raymond J. Mooney: Improving Grounded Natural Language Understanding through Human-Robot Dialog. CoRR abs/1903.00122 (2019)
- [i8] Chris Paxton, Yonatan Bisk, Jesse Thomason, Arunkumar Byravan, Dieter Fox: Prospection: Interpretable Plans From Language By Predicting the Future. CoRR abs/1903.08309 (2019)
- [i7] Collin Burns, Jesse Thomason, Wesley Tansey: Interpreting Black Box Models with Statistical Guarantees. CoRR abs/1904.00045 (2019)
- [i6] Rosario Scalise, Jesse Thomason, Yonatan Bisk, Siddhartha S. Srinivasa: Improving Robot Success Detection using Static Object Data. CoRR abs/1904.01650 (2019)
- [i5] Saeid Amiri, Sujay Bajracharya, Cihangir Goktolga, Jesse Thomason, Shiqi Zhang: Augmenting Knowledge through Statistical, Goal-oriented Human-Robot Dialog. CoRR abs/1907.03390 (2019)
- [i4] Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer: Vision-and-Dialog Navigation. CoRR abs/1907.04957 (2019)
- [i3] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox: ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. CoRR abs/1912.01734 (2019)
2018
- [c12] Wesley Tansey, Jesse Thomason, James G. Scott: Maximum-Variance Total Variation Denoising for Interpretable Spatial Smoothing. AAAI 2018: 2460-2467
- [c11] Jesse Thomason, Jivko Sinapov, Raymond J. Mooney, Peter Stone: Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions. AAAI 2018: 5520-5527
- [c10] Saeid Amiri, Suhua Wei, Shiqi Zhang, Jivko Sinapov, Jesse Thomason, Peter Stone: Robot Behavioral Exploration and Multi-modal Perception using Dynamically Constructed Controllers. AAAI Spring Symposia 2018
- [c9] Saeid Amiri, Suhua Wei, Shiqi Zhang, Jivko Sinapov, Jesse Thomason, Peter Stone: Multi-modal Predicate Identification using Dynamically Learned Robot Controllers. IJCAI 2018: 4638-4645
- [i2] Justin W. Hart, Harel Yedidsion, Yuqian Jiang, Nick Walker, Rishi Shah, Jesse Thomason, Aishwarya Padmakumar, Rolando Fernandez, Jivko Sinapov, Raymond J. Mooney, Peter Stone: Interaction and Autonomy in RoboCup@Home and Building-Wide Intelligence. CoRR abs/1810.02919 (2018)
- [i1] Jesse Thomason, Daniel Gordon, Yonatan Bisk: Shifting the Baseline: Single Modality Performance on Visual Navigation & QA. CoRR abs/1811.00613 (2018)
2017
- [j1] Piyush Khandelwal, Shiqi Zhang, Jivko Sinapov, Matteo Leonetti, Jesse Thomason, Fangkai Yang, Ilaria Gori, Maxwell Svetlik, Priyanka Khante, Vladimir Lifschitz, J. K. Aggarwal, Raymond J. Mooney, Peter Stone: BWIBots: A platform for bridging the gap between AI and human-robot interaction research. Int. J. Robotics Res. 36(5-7): 635-659 (2017)
- [c8] Jesse Thomason, Jivko Sinapov, Raymond J. Mooney: Guiding Interaction Behaviors for Multi-modal Grounded Language Learning. RoboNLP@ACL 2017: 20-24
- [c7] Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Justin W. Hart, Peter Stone, Raymond J. Mooney: Opportunistic Active Learning for Grounding Natural Language Descriptions. CoRL 2017: 67-76
- [c6] Aishwarya Padmakumar, Jesse Thomason, Raymond J. Mooney: Integrated Learning of Dialog Strategies and Semantic Parsing. EACL (1) 2017: 547-557
- [c5] Jesse Thomason, Raymond J. Mooney: Multi-Modal Word Synset Induction. IJCAI 2017: 4116-4122
- [c4] Rodolfo Corona, Jesse Thomason, Raymond J. Mooney: Improving Black-box Speech Recognition using Semantic Parsing. IJCNLP (2) 2017: 122-127
2016
- [c3] Jesse Thomason, Jivko Sinapov, Maxwell Svetlik, Peter Stone, Raymond J. Mooney: Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy". IJCAI 2016: 3477-3483
2015
- [c2] Jesse Thomason, Shiqi Zhang, Raymond J. Mooney, Peter Stone: Learning to Interpret Natural Language Commands through Human-Robot Dialog. IJCAI 2015: 1923-1929
2014
- [c1] Jesse Thomason, Subhashini Venugopalan, Sergio Guadarrama, Kate Saenko, Raymond J. Mooney: Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild. COLING 2014: 1218-1227