Update RL docs for better sharing / adding models (huggingface#1563)
* init docs update

* style

* fix bad colab formatting, add pipeline comment

* update todo
Nathan Lambert authored Dec 7, 2022
1 parent ca68ab3 commit bea7eb4
Showing 6 changed files with 46 additions and 68 deletions.
5 changes: 3 additions & 2 deletions docs/source/using-diffusers/other-modalities.mdx
@@ -14,7 +14,8 @@ specific language governing permissions and limitations under the License.

Diffusers is in the process of expanding to modalities other than images.

Currently, one example is for [molecule conformation](https://www.nature.com/subjects/molecular-conformation#:~:text=Definition,to%20changes%20in%20their%20environment.) generation.
* Generate conformations in Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/geodiff_molecule_conformation.ipynb)
Example type | Colab | Pipeline |
:-------------------------:|:-------------------------:|:-------------------------:|
[Molecule conformation](https://www.nature.com/subjects/molecular-conformation#:~:text=Definition,to%20changes%20in%20their%20environment.) generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/geodiff_molecule_conformation.ipynb) |

More coming soon!
11 changes: 9 additions & 2 deletions docs/source/using-diffusers/rl.mdx
@@ -13,6 +13,13 @@ specific language governing permissions and limitations under the License.
# Using Diffusers for reinforcement learning

Support for one RL model and related pipelines is included in the `experimental` source of diffusers.
More models and examples coming soon!

To try some of this in colab, please look at the following example:
* Model-based reinforcement learning on Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)
# Diffuser Value-guided Planning

You can run the model from [*Planning with Diffusion for Flexible Behavior Synthesis*](https://arxiv.org/abs/2205.09991) with Diffusers.
The script is located in the [RL Examples](https://github.com/huggingface/diffusers/tree/main/examples/rl) folder.

Or, run this example in Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb)
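A minimal sketch of what the example does (the `hopper-medium-v2` environment and the `bglick13/hopper-medium-v2-value-function-hor32` checkpoint id are assumptions taken from the example setup; the call arguments follow the pipeline's `__call__` signature shown later in this commit):

```python
# Hedged sketch: load the value-guided pipeline and plan one action.
import d4rl  # noqa: F401  (registers the offline-RL Gym environments)
import gym

from diffusers.experimental import ValueGuidedRLPipeline

env = gym.make("hopper-medium-v2")  # assumed environment; only Hopper has pretrained models
obs = env.reset()

pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32",  # assumed checkpoint id
    env=env,
)

# Arguments mirror __call__(obs, batch_size=64, planning_horizon=32, n_guide_steps=2, scale=0.1).
denorm_action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)
next_observation, reward, terminal, _ = env.step(denorm_action)
```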

[[autodoc]] diffusers.experimental.ValueGuidedRLPipeline
11 changes: 7 additions & 4 deletions examples/rl/README.md
@@ -1,9 +1,12 @@
# Overview

These examples show how to run (Diffuser)[https://arxiv.org/abs/2205.09991] in Diffusers.
There are four scripts,
1. `run_diffuser_locomotion.py` to sample actions and run them in the environment,
2. and `run_diffuser_gen_trajectories.py` to just sample actions from the pre-trained diffusion model.
These examples show how to run [Diffuser](https://arxiv.org/abs/2205.09991) in Diffusers.
There is now a single script, `run_diffuser_locomotion.py`, which can be used in two ways.

The key option is the `n_guide_steps` variable.
When `n_guide_steps=0`, the trajectories are sampled from the diffusion model, but not fine-tuned to maximize reward in the environment.
By default, `n_guide_steps=2` to match the original implementation.
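As a rough illustration (assuming the fields shown in the locomotion script's diff below are gathered in a single config dict), the two modes differ only in this one field:

```python
# Sketch of the relevant config for run_diffuser_locomotion.py (values copied
# from the diff below; the dict wrapper itself is an assumption).
config = dict(
    n_samples=64,
    horizon=32,
    num_inference_steps=20,
    n_guide_steps=2,  # value-guided planning (default, matches the original implementation)
    # n_guide_steps=0,  # unguided sampling: faster, skips the value network
    scale_grad_by_std=True,
    scale=0.1,
    eta=0.0,
)
```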


You will need some RL-specific requirements to run the examples:

57 changes: 0 additions & 57 deletions examples/rl/run_diffuser_gen_trajectories.py

This file was deleted.

4 changes: 3 additions & 1 deletion examples/rl/run_diffuser_locomotion.py
@@ -8,7 +8,7 @@
n_samples=64,
horizon=32,
num_inference_steps=20,
n_guide_steps=2,
n_guide_steps=2, # can set to 0 for faster sampling, does not use value network
scale_grad_by_std=True,
scale=0.1,
eta=0.0,
@@ -40,13 +40,15 @@
# execute action in environment
next_observation, reward, terminal, _ = env.step(denorm_actions)
score = env.get_normalized_score(total_reward)

# update return
total_reward += reward
total_score += score
print(
f"Step: {t}, Reward: {reward}, Total Reward: {total_reward}, Score: {score}, Total Score:"
f" {total_score}"
)

# save observations for rendering
rollout.append(next_observation.copy())

26 changes: 24 additions & 2 deletions src/diffusers/experimental/rl/value_guided_sampling.py
@@ -23,6 +23,22 @@


class ValueGuidedRLPipeline(DiffusionPipeline):
r"""
This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
Pipeline for sampling actions from a diffusion model trained to predict sequences of states.
Original implementation inspired by this repository: https://github.com/jannerm/diffuser.
Parameters:
value_function ([`UNet1DModel`]): A specialized UNet for fine-tuning trajectories based on reward.
unet ([`UNet1DModel`]): U-Net architecture to denoise the encoded trajectories.
scheduler ([`SchedulerMixin`]):
A scheduler to be used in combination with `unet` to denoise the encoded trajectories. Default for this
application is [`DDPMScheduler`].
env: An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.
"""

def __init__(
self,
value_function: UNet1DModel,
@@ -78,21 +94,26 @@ def run_diffusion(self, x, conditions, n_guide_steps, scale):
for _ in range(n_guide_steps):
with torch.enable_grad():
x.requires_grad_()

# permute to match dimension for pre-trained models
y = self.value_function(x.permute(0, 2, 1), timesteps).sample
grad = torch.autograd.grad([y.sum()], [x])[0]

posterior_variance = self.scheduler._get_variance(i)
model_std = torch.exp(0.5 * posterior_variance)
grad = model_std * grad

grad[timesteps < 2] = 0
x = x.detach()
x = x + scale * grad
x = self.reset_x0(x, conditions, self.action_dim)

prev_x = self.unet(x.permute(0, 2, 1), timesteps).sample.permute(0, 2, 1)
# TODO: set prediction_type when instantiating the model

# TODO: verify deprecation of this kwarg
x = self.scheduler.step(prev_x, i, x, predict_epsilon=False)["prev_sample"]

# apply conditions to the trajectory
# apply conditions to the trajectory (set the initial state)
x = self.reset_x0(x, conditions, self.action_dim)
x = self.to_torch(x)
return x, y
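To make the guidance loop above easier to follow, here is a self-contained toy version of the same idea: take gradient steps on the trajectory tensor to increase a predicted value before handing it back to the denoiser. The value function here is a stand-in purely for illustration; the real pipeline uses the pretrained `UNet1DModel` value network and also rescales the gradient by the scheduler's posterior standard deviation.

```python
import torch


def toy_value_function(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for the pretrained value network: "value" is just a feature sum.
    return x.sum(dim=(1, 2))


def guide(x: torch.Tensor, scale: float = 0.1, n_guide_steps: int = 2) -> torch.Tensor:
    # Nudge trajectories toward higher predicted value (gradient ascent).
    for _ in range(n_guide_steps):
        with torch.enable_grad():
            x.requires_grad_()
            y = toy_value_function(x)  # predicted return per trajectory
            grad = torch.autograd.grad([y.sum()], [x])[0]
        x = x.detach() + scale * grad
    return x


x = torch.randn(4, 32, 14)  # (batch, horizon, transition_dim), illustrative shapes
x = guide(x)
```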
@@ -126,5 +147,6 @@ def __call__(self, obs, batch_size=64, planning_horizon=32, n_guide_steps=2, sca
else:
# if we didn't run value guiding, select a random action
selected_index = np.random.randint(0, batch_size)

denorm_actions = denorm_actions[selected_index, 0]
return denorm_actions
