DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models

1MIT 2Tsinghua University 3MIT-IBM Watson AI Lab 4Harvard 5UMass Amherst
indicates equal advising
NeurIPS 2023 (Oral)

TL;DR: Design robots with generative AI.

DiffuseBot augments diffusion models with physical utility, generating designs specified by high-level functional attributes including robot geometry, material stiffness, and actuator placement.

Abstract

Nature evolves creatures with a high complexity of morphological and behavioral intelligence, while computational methods lag in approaching that diversity and efficacy. Co-optimization of artificial creatures' morphology and control in silico shows promise for applications in physical soft robotics and virtual character creation; such approaches, however, require developing new learning algorithms that can reason about function atop pure structure. In this paper, we present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks. DiffuseBot bridges the gap between virtually generated content and physical utility by (i) augmenting the diffusion process with a physical dynamical simulation which provides a certificate of performance, and (ii) introducing a co-design procedure that jointly optimizes physical design and control by leveraging information about physical sensitivities from differentiable simulation. We showcase a range of simulated and fabricated robots along with their capabilities.

Video

How Does DiffuseBot Work?

The DiffuseBot framework consists of three modules:
(i) Robotizing, which converts diffusion samples (e.g., a surface point cloud) into physically simulatable soft robot designs with high-level functional specifications including robot geometry, material stiffness, and actuator placement. This step endows a generation from the diffusion model with the prerequisite for physical utility.
(ii) Embedding Optimization, which iteratively generates new robot designs to be evaluated for training the conditional embedding. This is an online learning scheme in which the embedding-conditioned diffusion model generates a dataset to train itself, with the sampling distribution biased toward high task performance.
(iii) Diffusion as Co-design, which guides the sampling process with co-design gradients from differentiable simulation. We reformulate diffusion sampling as a joint design and control optimization: in each diffusion step, the intermediate sample is updated via Markov chain Monte Carlo using the design gradient, alongside a co-optimized controller that adapts to the current robot design.
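The diffusion-as-co-design loop can be sketched in miniature. The snippet below is a toy illustration, not the paper's implementation: `simulate`, `design_grad`, and `control_grad` stand in for a differentiable soft-body simulator with a made-up quadratic objective, the denoising step is a placeholder, and all dimensions and step sizes are arbitrary. It only shows the structure: each reverse-diffusion step is followed by a physics-guided update of the design and an inner co-optimization of the controller.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(design, control):
    # Toy stand-in for a differentiable simulation: the reward peaks
    # when the design matches a target and the controller matches the
    # design. (Assumed objective, not the paper's actual simulator.)
    return -np.sum((design - 1.0) ** 2) - np.sum((control - design) ** 2)

def design_grad(design, control):
    # d(reward)/d(design) for the toy objective above
    return -2.0 * (design - 1.0) + 2.0 * (control - design)

def control_grad(design, control):
    # d(reward)/d(control) for the toy objective above
    return -2.0 * (control - design)

T = 50
x = rng.normal(size=4)   # noisy "design" sample being denoised
u = np.zeros(4)          # controller parameters
for t in range(T, 0, -1):
    # (1) placeholder denoising step with noise annealed over time
    x = 0.95 * x + 0.05 * (t / T) * rng.normal(size=4)
    # (2) MCMC-style physics guidance: nudge the intermediate sample
    #     along the design gradient from the differentiable simulation
    x = x + 0.1 * design_grad(x, u)
    # (3) co-optimize the controller for the current robot design
    for _ in range(3):
        u = u + 0.2 * control_grad(x, u)

print(round(float(simulate(x, u)), 3))
```

The key design choice mirrored here is the interleaving: the controller is re-adapted inside every diffusion step, so the design gradient in step (2) is always evaluated against a controller matched to the current design.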

The physics-based simulation is coupled with the diffusion model via:
Arrow (A): evaluating robot designs and providing task-performance feedback that guides the training data distribution.
Arrow (B): providing design and control gradients from differentiable physics that steer sampling toward improved physical utility.
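The feedback loop of arrow (A) — embedding optimization as online self-training — can be illustrated with a deliberately tiny example. Everything below is assumed for illustration: the "embedding" is a single scalar, `performance` is an invented task score, and the conditioned diffusion model is replaced by Gaussian sampling around the embedding. The point is only the loop structure: sample conditioned designs, score them in simulation, and retrain the condition on a performance-reweighted version of its own samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def performance(design):
    # Invented task score: higher when the design is near 2.0
    return -abs(design - 2.0)

emb = 0.0  # scalar stand-in for the conditional embedding
for step in range(200):
    # sample designs "conditioned" on the current embedding
    designs = emb + rng.normal(scale=0.5, size=32)
    scores = np.array([performance(d) for d in designs])
    # bias the training distribution toward high performers
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # "train" the embedding on its own reweighted samples
    emb = float(np.sum(w * designs))

print(round(emb, 2))
```

Because the model trains on a reweighted version of what it just generated, the sampling distribution drifts toward high-performing designs over iterations, which is the essence of the online scheme described above.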

Toward Physical Utility

Robot Evolution Through Diffusion

Balancing

Landing

Crawling

Hurdling

Gripping

Box Moving

Robot Evolution Through Embedding Optimization

Balancing

Landing

Crawling

Hurdling

Gripping

Box Moving

Design With Human Feedback Via Language

+ Human Feedback: "A Ball"
Task: Balancing
+ Human Feedback: "A Round Disk"
Task: Crawling
+ Human Feedback: "Two Wings"
Task: Hurdling
+ Human Feedback: "Thick"
Task: Box Moving
+ Human Feedback: "A Unicorn"
Task: Crawling

Physical Robot Fabrication

Simulation

Physical World

BibTeX

@inproceedings{
  wang2023diffusebot,
  title={DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models},
  author={Tsun-Hsuan Wang and Juntian Zheng and Pingchuan Ma and Yilun Du and Byungchul Kim and Andrew Everett Spielberg and Joshua B. Tenenbaum and Chuang Gan and Daniela Rus},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=1zo4iioUEs}
}