Michael Noseworthy

I am a graduate student in the Robust Robotics Group at MIT.

My research focuses on robust planning under uncertainty, motivated by long-horizon manipulation tasks such as assembly and rearrangement. I have interned at the NVIDIA Seattle Robotics Lab, where I worked on contact-rich manipulation. Before MIT, I studied dialogue systems at McGill University's Reasoning and Learning Lab.

Email  /  Scholar  /  Twitter  /  Github


Research

FORGE: Force-Guided Exploration for Robust Contact-Rich Manipulation under Uncertainty
Michael Noseworthy, Bingjie Tang, Bowen Wen, Ankur Handa, Chad Kessens, Nicholas Roy, Dieter Fox, Fabio Ramos, Yashraj Narang, Iretiayo Akinola
CoRL 2024 Workshop on Learning Robotic Assembly [Best Paper]

Sim-to-real transfer of force sensing for contact-rich assembly tasks.

Paper / Website
Amortized Inference for Efficient Grasp Model Adaptation
Michael Noseworthy*, Seiji Shaw*, Chad Kessens, Nicholas Roy
ICRA 2024

Adaptively grasping objects with unknown dynamic properties (e.g., mass distribution or friction coefficients).

Paper
Insights towards Sim2Real Contact-Rich Manipulation
Michael Noseworthy, Iretiayo Akinola, Yashraj Narang, Fabio Ramos, Lucas Manuelli, Ankur Handa, Dieter Fox
NeurIPS 2022: Robot Learning Workshop

Training policies to solve contact-rich manipulation tasks with noisy pose estimates.

Paper
Object-Factored Models with Partially Observable State
Isaiah Brand*, Michael Noseworthy*, Sebastian Castro, Nicholas Roy
NeurIPS 2021: Bayesian Deep Learning Workshop

Efficient adaptation for manipulating objects with parameters that cannot be observed visually.

Paper
Active Learning of Abstract Plan Feasibility
Michael Noseworthy*, Caris Moses*, Isaiah Brand*, Sebastian Castro, Leslie Kaelbling, Tomás Lozano-Pérez, Nicholas Roy
RSS 2021

Efficient online learning of feasibility models using ensembles of graph networks.

Paper / Talk
Visual Prediction of Priors for Articulated Object Interaction
Caris Moses*, Michael Noseworthy*, Leslie Kaelbling, Tomás Lozano-Pérez, Nicholas Roy
ICRA 2020

Efficient manipulation of articulated objects using visual priors to infer kinematic parameters.

Paper / Talk / Code / Website
Task-Conditioned Variational Autoencoders for Learning Movement Primitives
Michael Noseworthy, Rohan Paul, Subhro Roy, Daehyung Park, Nicholas Roy
CoRL 2019

Learning interpretable movement primitives from demonstration.

Paper
Inferring Task Goals and Constraints using Bayesian Nonparametric Inverse Reinforcement Learning
Daehyung Park, Michael Noseworthy, Rohan Paul, Subhro Roy, Nicholas Roy
CoRL 2019

Learning from demonstration in the presence of complex constraints.

Paper
Leveraging Past References for Robust Language Grounding
Subhro Roy*, Michael Noseworthy*, Rohan Paul, Daehyung Park, Nicholas Roy
CoNLL 2019

Natural language grounding in situated and temporally extended contexts.

Paper
Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses
Ryan Lowe*, Michael Noseworthy*, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, Joelle Pineau
ACL 2017 [Outstanding Paper]

Automatic metric for dialogue model response evaluation.

Paper / Code / Talk
Predicting Success in Goal-Driven Human-Human Dialogues
Michael Noseworthy, Jackie Chi Kit Cheung, Joelle Pineau
SIGDIAL 2017

Automatic success prediction for task-driven dialogue systems.

Paper
How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation
Chia-Wei Liu*, Ryan Lowe*, Iulian Vlad Serban*, Michael Noseworthy*, Laurent Charlin, Joelle Pineau
EMNLP 2016

A study of how common automatic metrics for evaluating dialogue responses correlate with human judgement.

Paper / Talk

Miscellanea

Inclusion@CoRL Organizer, CoRL 2020
Queer in AI Organizer, RSS 2021
Queer in AI Organizer, CoRL 2021

Website template from Jon Barron.