Daniel S. Brown
2020 – today
- 2024
- [j7] Gersi Doko, Guang Yang, Daniel S. Brown, Marek Petrik: ROIL: Robust Offline Imitation Learning without Trajectories. RLJ 2: 593-605 (2024)
- [j6] Connor Mattson, Anurag Aribandi, Daniel S. Brown: Representation Alignment from Human Feedback for Cross-Embodiment Reward Learning from Mixed-Quality Demonstrations. RLJ 4: 1822-1840 (2024)
- [j5] Akansha Kalra, Daniel S. Brown: Can Differentiable Decision Trees Enable Interpretable Reward Learning from Human Feedback? RLJ 4: 1887-1910 (2024)
- [c43] Tu Trinh, Haoyu Chen, Daniel S. Brown: Autonomous Assessment of Demonstration Sufficiency via Bayesian Inverse Reinforcement Learning. HRI 2024: 725-733
- [c42] Dimitris Papadimitriou, Daniel S. Brown: Bayesian Constraint Inference from User Demonstrations Based on Margin-Respecting Preference Models. ICRA 2024: 15039-15046
- [c41] Zohre Karimi, Shing-Hei Ho, Bao Thach, Alan Kuntz, Daniel S. Brown: Reward Learning from Suboptimal Demonstrations with Applications in Surgical Electrocautery. ISMR 2024: 1-7
- [c40] Jordan Thompson, Brian Y. Cho, Daniel S. Brown, Alan Kuntz: Modeling Kinematic Uncertainty of Tendon-Driven Continuum Robots via Mixture Density Networks. ISMR 2024: 1-7
- [i42] Dimitris Papadimitriou, Daniel S. Brown: Bayesian Constraint Inference from User Demonstrations Based on Margin-Respecting Preference Models. CoRR abs/2403.02431 (2024)
- [i41] Jordan Thompson, Brian Y. Cho, Daniel S. Brown, Alan Kuntz: Modeling Kinematic Uncertainty of Tendon-Driven Continuum Robots via Mixture Density Networks. CoRR abs/2404.04241 (2024)
- [i40] Zohre Karimi, Shing-Hei Ho, Bao Thach, Alan Kuntz, Daniel S. Brown: Reward Learning from Suboptimal Demonstrations with Applications in Surgical Electrocautery. CoRR abs/2404.07185 (2024)
- [i39] Connor Mattson, Anurag Aribandi, Daniel S. Brown: Representation Alignment from Human Feedback for Cross-Embodiment Reward Learning from Mixed-Quality Demonstrations. CoRR abs/2408.05610 (2024)
- [i38] Kevin Zhu, Connor Mattson, Shay Snyder, Ricardo Vega, Daniel S. Brown, Maryam Parsa, Cameron Nowzari: Spiking Neural Networks as a Controller for Emergent Swarm Agents. CoRR abs/2410.16175 (2024)
- [i37] Ricardo Vega, Kevin Zhu, Connor Mattson, Daniel S. Brown, Cameron Nowzari: Agent-Based Emulation for Deploying Robot Swarm Behaviors. CoRR abs/2410.16444 (2024)
- 2023
- [j4] Daniel Shin, Anca D. Dragan, Daniel S. Brown: Benchmarks and Algorithms for Offline Preference-Based Reward Learning. Trans. Mach. Learn. Res. 2023 (2023)
- [c39] Gaurav R. Ghosal, Matthew Zurek, Daniel S. Brown, Anca D. Dragan: The Effect of Modeling Human Rationality Level on Learning Rewards from Multiple Feedback Types. AAAI 2023: 5983-5992
- [c38] Nancy N. Blackburn, M. Gardone, Daniel S. Brown: Player-Centric Procedural Content Generation: Enhancing Runtime Customization by Integrating Real-Time Player Feedback. CHI PLAY (Companion) 2023: 10-16
- [c37] Jerry Zhi-Yang He, Daniel S. Brown, Zackory Erickson, Anca D. Dragan: Quantifying Assistive Robustness Via the Natural-Adversarial Frontier. CoRL 2023: 1865-1886
- [c36] Connor Mattson, Daniel S. Brown: Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms. GECCO 2023: 56-64
- [c35] Andreea Bobu, Yi Liu, Rohin Shah, Daniel S. Brown, Anca D. Dragan: SIRL: Similarity-based Implicit Representation Learning. HRI 2023: 565-574
- [c34] Jeremy Tien, Jerry Zhi-Yang He, Zackory Erickson, Anca D. Dragan, Daniel S. Brown: Causal Confusion and Reward Misidentification in Preference-Based Reward Learning. ICLR 2023
- [c33] Gaurav Rohit Ghosal, Amrith Setlur, Daniel S. Brown, Anca D. Dragan, Aditi Raghunathan: Contextual Reliability: When Different Features Matter in Different Contexts. ICML 2023: 11300-11320
- [c32] Yi Liu, Gaurav Datta, Ellen R. Novoseller, Daniel S. Brown: Efficient Preference-Based Reinforcement Learning Using Learned Dynamics Models. ICRA 2023: 2921-2928
- [c31] Connor Mattson, Jeremy C. Clark, Daniel S. Brown: Exploring Behavior Discovery Methods for Heterogeneous Swarms of Limited-Capability Robots. MRS 2023: 163-169
- [i36] Andreea Bobu, Yi Liu, Rohin Shah, Daniel S. Brown, Anca D. Dragan: SIRL: Similarity-based Implicit Representation Learning. CoRR abs/2301.00810 (2023)
- [i35] Daniel Shin, Anca D. Dragan, Daniel S. Brown: Benchmarks and Algorithms for Offline Preference-Based Reward Learning. CoRR abs/2301.01392 (2023)
- [i34] Yi Liu, Gaurav Datta, Ellen R. Novoseller, Daniel S. Brown: Efficient Preference-Based Reinforcement Learning Using Learned Dynamics Models. CoRR abs/2301.04741 (2023)
- [i33] Connor Mattson, Daniel S. Brown: Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms. CoRR abs/2305.16148 (2023)
- [i32] Akansha Kalra, Daniel S. Brown: Can Differentiable Decision Trees Learn Interpretable Reward Functions? CoRR abs/2306.13004 (2023)
- [i31] Gaurav R. Ghosal, Amrith Setlur, Daniel S. Brown, Anca D. Dragan, Aditi Raghunathan: Contextual Reliability: When Different Features Matter in Different Contexts. CoRR abs/2307.10026 (2023)
- [i30] Ricardo Vega, Kevin Zhu, Connor Mattson, Daniel S. Brown, Cameron Nowzari: Swarm Mechanics and Swarm Chemistry: A Transdisciplinary Approach for Robot Swarms. CoRR abs/2309.11408 (2023)
- [i29] Jerry Zhi-Yang He, Zackory Erickson, Daniel S. Brown, Anca D. Dragan: Quantifying Assistive Robustness Via the Natural-Adversarial Frontier. CoRR abs/2310.10610 (2023)
- [i28] Connor Mattson, Jeremy C. Clark, Daniel S. Brown: Exploring Behavior Discovery Methods for Heterogeneous Swarms of Limited-Capability Robots. CoRR abs/2310.16941 (2023)
- 2022
- [j3] Dimitris Papadimitriou, Usman Anwar, Daniel S. Brown: Bayesian Methods for Constraint Inference in Reinforcement Learning. Trans. Mach. Learn. Res. 2022 (2022)
- [c30] Satvik Sharma, Ellen R. Novoseller, Vainavi Viswanath, Zaynah Javed, Rishi Parikh, Ryan Hoque, Ashwin Balakrishna, Daniel S. Brown, Ken Goldberg: Learning Switching Criteria for Sim2Real Transfer of Robotic Fabric Manipulation Policies. CASE 2022: 1116-1123
- [c29] Jerry Zhi-Yang He, Zackory Erickson, Daniel S. Brown, Aditi Raghunathan, Anca D. Dragan: Learning Representations that Enable Generalization in Assistive Tasks. CoRL 2022: 2105-2114
- [c28] Letian Fu, Michael Danielczuk, Ashwin Balakrishna, Daniel S. Brown, Jeffrey Ichnowski, Eugen Solowjow, Ken Goldberg: LEGS: Learning Efficient Grasp Sets for Exploratory Grasping. ICRA 2022: 8259-8265
- [c27] Arjun Sripathy, Andreea Bobu, Zhongyu Li, Koushil Sreenath, Daniel S. Brown, Anca D. Dragan: Teaching Robots to Span the Space of Functional Expressive Motion. IROS 2022: 13406-13413
- [c26] Albert Wilcox, Ashwin Balakrishna, Jules Dedieu, Wyame Benslimane, Daniel S. Brown, Ken Goldberg: Monte Carlo Augmented Actor-Critic for Sparse Reward Deep Reinforcement Learning from Suboptimal Demonstrations. NeurIPS 2022
- [i27] Arjun Sripathy, Andreea Bobu, Zhongyu Li, Koushil Sreenath, Daniel S. Brown, Anca D. Dragan: Teaching Robots to Span the Space of Functional Expressive Motion. CoRR abs/2203.02091 (2022)
- [i26] Jeremy Tien, Jerry Zhi-Yang He, Zackory Erickson, Anca D. Dragan, Daniel S. Brown: A Study of Causal Confusion in Preference-Based Reward Learning. CoRR abs/2204.06601 (2022)
- [i25] Satvik Sharma, Ellen R. Novoseller, Vainavi Viswanath, Zaynah Javed, Rishi Parikh, Ryan Hoque, Ashwin Balakrishna, Daniel S. Brown, Ken Goldberg: Learning Switching Criteria for Sim2Real Transfer of Robotic Fabric Manipulation Policies. CoRR abs/2207.00911 (2022)
- [i24] Gaurav R. Ghosal, Matthew Zurek, Daniel S. Brown, Anca D. Dragan: The Effect of Modeling Human Rationality Level on Learning Rewards from Multiple Feedback Types. CoRR abs/2208.10687 (2022)
- [i23] Albert Wilcox, Ashwin Balakrishna, Jules Dedieu, Wyame Benslimane, Daniel S. Brown, Ken Goldberg: Monte Carlo Augmented Actor-Critic for Sparse Reward Deep Reinforcement Learning from Suboptimal Demonstrations. CoRR abs/2210.07432 (2022)
- [i22] Tu Trinh, Daniel S. Brown: Autonomous Assessment of Demonstration Sufficiency via Bayesian Inverse Reinforcement Learning. CoRR abs/2211.15542 (2022)
- [i21] Jerry Zhi-Yang He, Aditi Raghunathan, Daniel S. Brown, Zackory Erickson, Anca D. Dragan: Learning Representations that Enable Generalization in Assistive Tasks. CoRR abs/2212.03175 (2022)
- 2021
- [c25] Ryan Hoque, Ashwin Balakrishna, Carl Putterman, Michael Luo, Daniel S. Brown, Daniel Seita, Brijen Thananjeyan, Ellen R. Novoseller, Ken Goldberg: LazyDAgger: Reducing Context Switching in Interactive Imitation Learning. CASE 2021: 502-509
- [c24] Shivin Devgon, Jeffrey Ichnowski, Michael Danielczuk, Daniel S. Brown, Ashwin Balakrishna, Shirin Joshi, Eduardo M. C. Rocha, Eugen Solowjow, Ken Goldberg: Kit-Net: Self-Supervised Learning to Kit Novel 3D Objects into Novel 3D Cavities. CASE 2021: 1124-1131
- [c23] Ryan Hoque, Ashwin Balakrishna, Ellen R. Novoseller, Albert Wilcox, Daniel S. Brown, Ken Goldberg: ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning. CoRL 2021: 598-608
- [c22] Daniel S. Brown, Jordan Schneider, Anca D. Dragan, Scott Niekum: Value Alignment Verification. ICML 2021: 1105-1115
- [c21] Zaynah Javed, Daniel S. Brown, Satvik Sharma, Jerry Zhu, Ashwin Balakrishna, Marek Petrik, Anca D. Dragan, Ken Goldberg: Policy Gradient Bayesian Robust Optimization for Imitation Learning. ICML 2021: 4785-4796
- [c20] Matthew Zurek, Andreea Bobu, Daniel S. Brown, Anca D. Dragan: Situational Confidence Assistance for Lifelong Shared Autonomy. ICRA 2021: 2783-2789
- [c19] Arjun Sripathy, Andreea Bobu, Daniel S. Brown, Anca D. Dragan: Dynamically Switching Human Prediction Models for Efficient Planning. ICRA 2021: 3495-3501
- [c18] Avik Jain, Lawrence Chan, Daniel S. Brown, Anca D. Dragan: Optimal Cost Design for Model Predictive Control. L4DC 2021: 1205-1217
- [i20] Arjun Sripathy, Andreea Bobu, Daniel S. Brown, Anca D. Dragan: Dynamically Switching Human Prediction Models for Efficient Planning. CoRR abs/2103.07815 (2021)
- [i19] Ryan Hoque, Ashwin Balakrishna, Carl Putterman, Michael Luo, Daniel S. Brown, Daniel Seita, Brijen Thananjeyan, Ellen R. Novoseller, Ken Goldberg: LazyDAgger: Reducing Context Switching in Interactive Imitation Learning. CoRR abs/2104.00053 (2021)
- [i18] Matthew Zurek, Andreea Bobu, Daniel S. Brown, Anca D. Dragan: Situational Confidence Assistance for Lifelong Shared Autonomy. CoRR abs/2104.06556 (2021)
- [i17] Avik Jain, Lawrence Chan, Daniel S. Brown, Anca D. Dragan: Optimal Cost Design for Model Predictive Control. CoRR abs/2104.11353 (2021)
- [i16] Zaynah Javed, Daniel S. Brown, Satvik Sharma, Jerry Zhu, Ashwin Balakrishna, Marek Petrik, Anca D. Dragan, Ken Goldberg: Policy Gradient Bayesian Robust Optimization for Imitation Learning. CoRR abs/2106.06499 (2021)
- [i15] Shivin Devgon, Jeffrey Ichnowski, Michael Danielczuk, Daniel S. Brown, Ashwin Balakrishna, Shirin Joshi, Eduardo M. C. Rocha, Eugen Solowjow, Ken Goldberg: Kit-Net: Self-Supervised Learning to Kit Novel 3D Objects into Novel 3D Cavities. CoRR abs/2107.05789 (2021)
- [i14] Daniel Shin, Daniel S. Brown: Offline Preference-Based Apprenticeship Learning. CoRR abs/2107.09251 (2021)
- [i13] Ryan Hoque, Ashwin Balakrishna, Ellen R. Novoseller, Albert Wilcox, Daniel S. Brown, Ken Goldberg: ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning. CoRR abs/2109.08273 (2021)
- [i12] Letian Fu, Michael Danielczuk, Ashwin Balakrishna, Daniel S. Brown, Jeffrey Ichnowski, Eugen Solowjow, Ken Goldberg: LEGS: Learning Efficient Grasp Sets for Exploratory Grasping. CoRR abs/2111.15002 (2021)
- 2020
- [c17] Michael Danielczuk, Ashwin Balakrishna, Daniel S. Brown, Ken Goldberg: Exploratory Grasping: Asymptotically Optimal Algorithms for Grasping Challenging Polyhedral Objects. CoRL 2020: 377-393
- [c16] Daniel S. Brown, Russell Coleman, Ravi Srinivasan, Scott Niekum: Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences. ICML 2020: 1165-1177
- [c15] Daniel S. Brown, Scott Niekum, Marek Petrik: Bayesian Robust Optimization for Imitation Learning. NeurIPS 2020
- [i11] Daniel S. Brown, Russell Coleman, Ravi Srinivasan, Scott Niekum: Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences. CoRR abs/2002.09089 (2020)
- [i10] Daniel S. Brown, Scott Niekum, Marek Petrik: Bayesian Robust Optimization for Imitation Learning. CoRR abs/2007.12315 (2020)
- [i9] Michael Danielczuk, Ashwin Balakrishna, Daniel S. Brown, Shivin Devgon, Ken Goldberg: Exploratory Grasping: Asymptotically Optimal Algorithms for Grasping Challenging Polyhedral Objects. CoRR abs/2011.05632 (2020)
- [i8] Daniel S. Brown, Jordan Schneider, Scott Niekum: Value Alignment Verification. CoRR abs/2012.01557 (2020)
2010 – 2019
- 2019
- [c14] Daniel S. Brown, Scott Niekum: Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications. AAAI 2019: 7749-7758
- [c13] Daniel S. Brown, Wonjoon Goo, Scott Niekum: Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations. CoRL 2019: 330-359
- [c12] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations. ICML 2019: 783-792
- [i7] Daniel S. Brown, Yuchen Cui, Scott Niekum: Risk-Aware Active Inverse Reinforcement Learning. CoRR abs/1901.02161 (2019)
- [i6] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations. CoRR abs/1904.06387 (2019)
- [i5] Daniel S. Brown, Wonjoon Goo, Scott Niekum: Ranking-Based Reward Extrapolation without Rankings. CoRR abs/1907.03976 (2019)
- [i4] Daniel S. Brown, Scott Niekum: Deep Bayesian Reward Learning from Preferences. CoRR abs/1912.04472 (2019)
- 2018
- [c11] Daniel S. Brown, Scott Niekum: Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning. AAAI 2018: 2754-2762
- [c10] Daniel S. Brown, Yuchen Cui, Scott Niekum: Risk-Aware Active Inverse Reinforcement Learning. CoRL 2018: 362-372
- [i3] Daniel S. Brown, Scott Niekum: Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications. CoRR abs/1805.07687 (2018)
- [i2] Yuqian Jiang, Nick Walker, Minkyu Kim, Nicolas Brissonneau, Daniel S. Brown, Justin W. Hart, Scott Niekum, Luis Sentis, Peter Stone: LAAIR: A Layered Architecture for Autonomous Interactive Robots. CoRR abs/1811.03563 (2018)
- 2017
- [j2] Daniel S. Brown, Jeffrey Hudack, Nathaniel Gemelli, Bikramjit Banerjee: Exact and Heuristic Algorithms for Risk-Aware Stochastic Physical Search. Comput. Intell. 33(3): 524-553 (2017)
- [c9] Daniel S. Brown, Scott Niekum: Toward Probabilistic Safety Bounds for Robot Learning from Demonstration. AAAI Fall Symposia 2017: 10-18
- [i1] Daniel S. Brown, Scott Niekum: Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning. CoRR abs/1707.00724 (2017)
- 2016
- [j1] Daniel S. Brown, Michael A. Goodrich, Shin-Young Jung, Sean Kerman: Two invariants of human-swarm interaction. J. Hum. Robot Interact. 5(1): 1-31 (2016)
- [c8] Daniel S. Brown, Ryan Turner, Oliver Hennigh, Steven Loscalzo: Discovery and Exploration of Novel Swarm Behaviors Given Limited Robot Capabilities. DARS 2016: 447-460
- [c7] Matthew Berger, Lee M. Seversky, Daniel S. Brown: Classifying swarm behavior via compressive subspace learning. ICRA 2016: 5328-5335
- 2015
- [c6] Daniel S. Brown, Steven Loscalzo, Nathaniel Gemelli: k-Agent Sufficiency for Multiagent Stochastic Physical Search Problems. ADT 2015: 171-186
- [c5] Jeffrey Hudack, Nathaniel Gemelli, Daniel S. Brown, Steven Loscalzo, Jae C. Oh: Multiobjective Optimization for the Stochastic Physical Search Problem. IEA/AIE 2015: 212-221
- 2014
- [c4] Daniel S. Brown, Michael A. Goodrich: Limited bandwidth recognition of collective behaviors in bio-inspired swarms. AAMAS 2014: 405-412
- [c3] Daniel S. Brown, Sean C. Kerman, Michael A. Goodrich: Human-swarm interactions based on managing attractors. HRI 2014: 90-97
- [c2] Daniel S. Brown, Shin-Young Jun, Michael A. Goodrich: Balancing human and inter-agent influences for shared control of bio-inspired collectives. SMC 2014: 4123-4128
- 2013
- [c1] Shin-Young Jun, Daniel S. Brown, Michael A. Goodrich: Shaping Couzin-Like Torus Swarms through Coordinated Mediation. SMC 2013: 1834-1839