My research focuses on robust planning under uncertainty, motivated by long-horizon manipulation tasks such as assembly and rearrangement. I am also a part-time intern at the NVIDIA Seattle Robotics Lab, where I work on contact-rich manipulation. Before MIT, I studied dialogue systems at McGill University's Reasoning and Learning Lab.
Object-Factored Models with Partially Observable State
Isaiah Brand*,
Michael Noseworthy*,
Sebastian Castro,
Nicholas Roy
NeurIPS 2021: Bayesian Deep Learning Workshop
Efficient adaptation for manipulating objects with non-visual parameters.
Active Learning of Abstract Plan Feasibility
Michael Noseworthy*,
Caris Moses*,
Isaiah Brand*,
Sebastian Castro,
Leslie Kaelbling,
Tomás Lozano-Pérez,
Nicholas Roy
RSS 2021
Efficient online learning of plan feasibility models using ensembles of graph neural networks.
Visual Prediction of Priors for Articulated Object Interaction
Caris Moses*,
Michael Noseworthy*,
Leslie Kaelbling,
Tomás Lozano-Pérez,
Nicholas Roy
ICRA 2020
Efficient manipulation of articulated objects using visual priors to infer kinematic parameters.
Task-Conditioned Variational Autoencoders for Learning Movement Primitives
Michael Noseworthy,
Rohan Paul,
Subhro Roy,
Daehyung Park,
Nicholas Roy
CoRL 2019
Learning interpretable movement primitives from demonstration.
Inferring Task Goals and Constraints using Bayesian Nonparametric Inverse Reinforcement Learning
Daehyung Park,
Michael Noseworthy,
Rohan Paul,
Subhro Roy,
Nicholas Roy
CoRL 2019
Learning from demonstration in the presence of complex constraints.
Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses
Ryan Lowe*,
Michael Noseworthy*,
Iulian Vlad Serban,
Nicolas Angelard-Gontier,
Yoshua Bengio,
Joelle Pineau
ACL 2017
An automatic metric for evaluating dialogue model responses.
How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation
Chia-Wei Liu*,
Ryan Lowe*,
Iulian Vlad Serban*,
Michael Noseworthy*,
Laurent Charlin,
Joelle Pineau
EMNLP 2016
A study of how well common automatic metrics for evaluating dialogue responses correlate with human judgement.