Automated tutorials push
pytorchbot committed Jan 22, 2025
1 parent e4aef6b commit 7abbd83
Showing 186 changed files with 12,971 additions and 12,687 deletions.
Binary file modified _images/sphx_glr_char_rnn_classification_tutorial_001.png
Binary file modified _images/sphx_glr_char_rnn_classification_tutorial_002.png
Binary file modified _images/sphx_glr_coding_ddpg_001.png
Binary file modified _images/sphx_glr_dqn_with_rnn_tutorial_001.png
Binary file modified _images/sphx_glr_neural_style_tutorial_004.png
Binary file modified _images/sphx_glr_pinmem_nonblock_001.png
Binary file modified _images/sphx_glr_pinmem_nonblock_002.png
Binary file modified _images/sphx_glr_pinmem_nonblock_003.png
Binary file modified _images/sphx_glr_pinmem_nonblock_004.png
Binary file modified _images/sphx_glr_reinforcement_ppo_001.png
Binary file modified _images/sphx_glr_reinforcement_q_learning_001.png
Binary file modified _images/sphx_glr_semi_structured_sparse_001.png
Binary file modified _images/sphx_glr_semi_structured_sparse_002.png
Binary file modified _images/sphx_glr_spatial_transformer_tutorial_001.png
Binary file modified _images/sphx_glr_torchvision_tutorial_002.png
42 changes: 21 additions & 21 deletions _sources/advanced/coding_ddpg.rst.txt
@@ -1634,26 +1634,26 @@ modules we need.
  0%| | 0/10000 [00:00<?, ?it/s]
- 8%|8 | 800/10000 [00:00<00:05, 1649.92it/s]
- 16%|#6 | 1600/10000 [00:02<00:17, 481.22it/s]
- 24%|##4 | 2400/10000 [00:03<00:11, 679.82it/s]
- 32%|###2 | 3200/10000 [00:04<00:07, 854.25it/s]
- 40%|#### | 4000/10000 [00:04<00:06, 991.60it/s]
- 48%|####8 | 4800/10000 [00:05<00:04, 1103.97it/s]
- 56%|#####6 | 5600/10000 [00:05<00:03, 1186.81it/s]
- reward: -2.36 (r0 = -2.46), reward eval: reward: -0.00, reward normalized=-2.17/6.02, grad norm= 70.85, loss_value= 259.91, loss_actor= 13.11, target value: -12.55: 56%|#####6 | 5600/10000 [00:06<00:03, 1186.81it/s]
- reward: -2.36 (r0 = -2.46), reward eval: reward: -0.00, reward normalized=-2.17/6.02, grad norm= 70.85, loss_value= 259.91, loss_actor= 13.11, target value: -12.55: 64%|######4 | 6400/10000 [00:07<00:04, 828.88it/s]
- reward: -0.11 (r0 = -2.46), reward eval: reward: -0.00, reward normalized=-2.21/5.41, grad norm= 48.15, loss_value= 220.38, loss_actor= 13.06, target value: -14.67: 64%|######4 | 6400/10000 [00:08<00:04, 828.88it/s]
- reward: -0.11 (r0 = -2.46), reward eval: reward: -0.00, reward normalized=-2.21/5.41, grad norm= 48.15, loss_value= 220.38, loss_actor= 13.06, target value: -14.67: 72%|#######2 | 7200/10000 [00:09<00:04, 680.34it/s]
- reward: -2.34 (r0 = -2.46), reward eval: reward: -0.00, reward normalized=-2.42/5.53, grad norm= 124.03, loss_value= 233.88, loss_actor= 14.89, target value: -15.42: 72%|#######2 | 7200/10000 [00:09<00:04, 680.34it/s]
- reward: -2.34 (r0 = -2.46), reward eval: reward: -0.00, reward normalized=-2.42/5.53, grad norm= 124.03, loss_value= 233.88, loss_actor= 14.89, target value: -15.42: 80%|######## | 8000/10000 [00:10<00:03, 600.90it/s]
- reward: -4.42 (r0 = -2.46), reward eval: reward: -0.00, reward normalized=-2.45/4.73, grad norm= 72.59, loss_value= 179.06, loss_actor= 14.14, target value: -16.28: 80%|######## | 8000/10000 [00:11<00:03, 600.90it/s]
- reward: -4.42 (r0 = -2.46), reward eval: reward: -0.00, reward normalized=-2.45/4.73, grad norm= 72.59, loss_value= 179.06, loss_actor= 14.14, target value: -16.28: 88%|########8 | 8800/10000 [00:12<00:02, 560.50it/s]
- reward: -4.65 (r0 = -2.46), reward eval: reward: -4.50, reward normalized=-2.63/5.20, grad norm= 170.41, loss_value= 308.90, loss_actor= 11.66, target value: -17.58: 88%|########8 | 8800/10000 [00:15<00:02, 560.50it/s]
- reward: -4.65 (r0 = -2.46), reward eval: reward: -4.50, reward normalized=-2.63/5.20, grad norm= 170.41, loss_value= 308.90, loss_actor= 11.66, target value: -17.58: 96%|#########6| 9600/10000 [00:15<00:01, 391.02it/s]
- reward: -3.49 (r0 = -2.46), reward eval: reward: -4.50, reward normalized=-2.54/4.64, grad norm= 94.25, loss_value= 200.86, loss_actor= 13.08, target value: -17.75: 96%|#########6| 9600/10000 [00:16<00:01, 391.02it/s]
- reward: -3.49 (r0 = -2.46), reward eval: reward: -4.50, reward normalized=-2.54/4.64, grad norm= 94.25, loss_value= 200.86, loss_actor= 13.08, target value: -17.75: : 10400it [00:18, 355.20it/s]
- reward: -3.71 (r0 = -2.46), reward eval: reward: -4.50, reward normalized=-2.84/4.12, grad norm= 143.58, loss_value= 128.55, loss_actor= 17.95, target value: -20.42: : 10400it [00:19, 355.20it/s]
+ 8%|8 | 800/10000 [00:00<00:05, 1648.36it/s]
+ 16%|#6 | 1600/10000 [00:02<00:17, 486.77it/s]
+ 24%|##4 | 2400/10000 [00:03<00:10, 691.82it/s]
+ 32%|###2 | 3200/10000 [00:04<00:07, 881.23it/s]
+ 40%|#### | 4000/10000 [00:04<00:05, 1035.67it/s]
+ 48%|####8 | 4800/10000 [00:05<00:04, 1163.73it/s]
+ 56%|#####6 | 5600/10000 [00:05<00:03, 1259.53it/s]
+ reward: -2.42 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.79/6.18, grad norm= 44.37, loss_value= 264.02, loss_actor= 15.37, target value: -16.89: 56%|#####6 | 5600/10000 [00:06<00:03, 1259.53it/s]
+ reward: -2.42 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.79/6.18, grad norm= 44.37, loss_value= 264.02, loss_actor= 15.37, target value: -16.89: 64%|######4 | 6400/10000 [00:07<00:04, 869.73it/s]
+ reward: -0.12 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.19/5.28, grad norm= 32.93, loss_value= 177.57, loss_actor= 11.78, target value: -13.67: 64%|######4 | 6400/10000 [00:07<00:04, 869.73it/s]
+ reward: -0.12 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.19/5.28, grad norm= 32.93, loss_value= 177.57, loss_actor= 11.78, target value: -13.67: 72%|#######2 | 7200/10000 [00:08<00:04, 699.25it/s]
+ reward: -3.01 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.92/6.06, grad norm= 348.61, loss_value= 345.16, loss_actor= 14.69, target value: -19.19: 72%|#######2 | 7200/10000 [00:09<00:04, 699.25it/s]
+ reward: -3.01 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.92/6.06, grad norm= 348.61, loss_value= 345.16, loss_actor= 14.69, target value: -19.19: 80%|######## | 8000/10000 [00:10<00:03, 621.57it/s]
+ reward: -4.36 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.66/4.94, grad norm= 124.14, loss_value= 198.18, loss_actor= 12.94, target value: -17.62: 80%|######## | 8000/10000 [00:11<00:03, 621.57it/s]
+ reward: -4.36 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.66/4.94, grad norm= 124.14, loss_value= 198.18, loss_actor= 12.94, target value: -17.62: 88%|########8 | 8800/10000 [00:11<00:02, 579.06it/s]
+ reward: -3.76 (r0 = -2.48), reward eval: reward: -5.85, reward normalized=-2.16/4.86, grad norm= 76.93, loss_value= 169.87, loss_actor= 14.40, target value: -15.06: 88%|########8 | 8800/10000 [00:14<00:02, 579.06it/s]
+ reward: -3.76 (r0 = -2.48), reward eval: reward: -5.85, reward normalized=-2.16/4.86, grad norm= 76.93, loss_value= 169.87, loss_actor= 14.40, target value: -15.06: 96%|#########6| 9600/10000 [00:15<00:01, 399.67it/s]
+ reward: -4.49 (r0 = -2.48), reward eval: reward: -5.85, reward normalized=-2.41/5.07, grad norm= 69.34, loss_value= 210.28, loss_actor= 16.28, target value: -17.91: 96%|#########6| 9600/10000 [00:16<00:01, 399.67it/s]
+ reward: -4.49 (r0 = -2.48), reward eval: reward: -5.85, reward normalized=-2.41/5.07, grad norm= 69.34, loss_value= 210.28, loss_actor= 16.28, target value: -17.91: : 10400it [00:18, 366.73it/s]
+ reward: -4.88 (r0 = -2.48), reward eval: reward: -5.85, reward normalized=-3.59/4.17, grad norm= 147.86, loss_value= 198.34, loss_actor= 23.19, target value: -25.45: : 10400it [00:18, 366.73it/s]
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 29.589 seconds)
+ **Total running time of the script:** ( 0 minutes 28.592 seconds)


.. _sphx_glr_download_advanced_coding_ddpg.py:
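For readers of the training log above: ``loss_value`` and ``loss_actor`` are the two objectives a DDPG training loop optimizes. The following is a minimal sketch of how those quantities are conventionally computed, assuming generic ``actor``/``critic`` callables and a transition batch; it is a generic reconstruction, not the TorchRL loss module the tutorial actually builds.

.. code-block:: python

   import torch
   from torch.nn import functional as F

   def ddpg_losses(actor, critic, target_actor, target_critic,
                   obs, action, reward, next_obs, done, gamma=0.99):
       # Critic ("value") loss: regress Q(s, a) onto the bootstrapped target.
       with torch.no_grad():
           next_q = target_critic(next_obs, target_actor(next_obs))
           target = reward + gamma * (1.0 - done) * next_q
       loss_value = F.mse_loss(critic(obs, action), target)
       # Actor loss: maximize Q at the actor's own action by minimizing its negation.
       loss_actor = -critic(obs, actor(obs)).mean()
       return loss_value, loss_actor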
6 changes: 3 additions & 3 deletions _sources/advanced/dynamic_quantization_tutorial.rst.txt
@@ -517,9 +517,9 @@ models run single threaded.
  .. code-block:: none
  loss: 5.167
- elapsed time (seconds): 207.4
+ elapsed time (seconds): 204.1
  loss: 5.168
- elapsed time (seconds): 119.2
+ elapsed time (seconds): 116.5
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issue

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 5 minutes 37.532 seconds)
+ **Total running time of the script:** ( 5 minutes 31.304 seconds)


.. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
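The two loss/elapsed-time pairs above correspond to the float model and its dynamically quantized counterpart, which the tutorial benchmarks single-threaded. As a minimal sketch of the API being exercised — with a hypothetical ``nn.Linear`` stack standing in for the tutorial's word-language LSTM:

.. code-block:: python

   import torch
   from torch import nn

   # Hypothetical float model; the tutorial quantizes an LSTM language model.
   model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 256))

   # Dynamic quantization: weights are stored as int8, activations are
   # quantized on the fly at inference time.
   quantized_model = torch.quantization.quantize_dynamic(
       model, {nn.Linear}, dtype=torch.qint8
   )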
68 changes: 33 additions & 35 deletions _sources/advanced/neural_style_tutorial.rst.txt
@@ -410,34 +410,32 @@ network to evaluation mode using ``.eval()``.
  Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
  0%| | 0.00/548M [00:00<?, ?B/s]
- 2%|1 | 9.38M/548M [00:00<00:05, 97.7MB/s]
- 4%|4 | 23.4M/548M [00:00<00:04, 126MB/s]
- 8%|8 | 43.9M/548M [00:00<00:03, 166MB/s]
- 12%|#1 | 64.5M/548M [00:00<00:02, 186MB/s]
- 16%|#5 | 85.4M/548M [00:00<00:02, 197MB/s]
- 19%|#9 | 106M/548M [00:00<00:02, 203MB/s]
- 23%|##3 | 126M/548M [00:00<00:02, 207MB/s]
- 27%|##6 | 147M/548M [00:00<00:02, 209MB/s]
- 31%|### | 168M/548M [00:00<00:01, 212MB/s]
- 34%|###4 | 188M/548M [00:01<00:01, 214MB/s]
- 38%|###8 | 209M/548M [00:01<00:01, 215MB/s]
- 42%|####1 | 230M/548M [00:01<00:01, 215MB/s]
- 46%|####5 | 251M/548M [00:01<00:01, 216MB/s]
- 50%|####9 | 272M/548M [00:01<00:01, 217MB/s]
- 53%|#####3 | 292M/548M [00:01<00:01, 217MB/s]
- 57%|#####7 | 313M/548M [00:01<00:01, 217MB/s]
- 61%|###### | 334M/548M [00:01<00:01, 217MB/s]
- 65%|######4 | 355M/548M [00:01<00:00, 217MB/s]
- 69%|######8 | 376M/548M [00:01<00:00, 218MB/s]
- 72%|#######2 | 397M/548M [00:02<00:00, 218MB/s]
- 76%|#######6 | 418M/548M [00:02<00:00, 218MB/s]
- 80%|#######9 | 438M/548M [00:02<00:00, 218MB/s]
- 84%|########3 | 459M/548M [00:02<00:00, 218MB/s]
- 88%|########7 | 480M/548M [00:02<00:00, 218MB/s]
- 91%|#########1| 501M/548M [00:02<00:00, 218MB/s]
- 95%|#########5| 522M/548M [00:02<00:00, 218MB/s]
- 99%|#########9| 543M/548M [00:02<00:00, 218MB/s]
- 100%|##########| 548M/548M [00:02<00:00, 210MB/s]
+ 4%|3 | 20.6M/548M [00:00<00:02, 216MB/s]
+ 8%|7 | 41.9M/548M [00:00<00:02, 220MB/s]
+ 12%|#1 | 63.1M/548M [00:00<00:02, 221MB/s]
+ 15%|#5 | 84.4M/548M [00:00<00:02, 221MB/s]
+ 19%|#9 | 106M/548M [00:00<00:02, 222MB/s]
+ 23%|##3 | 127M/548M [00:00<00:01, 222MB/s]
+ 27%|##7 | 148M/548M [00:00<00:01, 223MB/s]
+ 31%|### | 170M/548M [00:00<00:01, 223MB/s]
+ 35%|###4 | 191M/548M [00:00<00:01, 223MB/s]
+ 39%|###8 | 212M/548M [00:01<00:01, 223MB/s]
+ 43%|####2 | 234M/548M [00:01<00:01, 223MB/s]
+ 47%|####6 | 255M/548M [00:01<00:01, 223MB/s]
+ 50%|##### | 276M/548M [00:01<00:01, 223MB/s]
+ 54%|#####4 | 298M/548M [00:01<00:01, 223MB/s]
+ 58%|#####8 | 319M/548M [00:01<00:01, 223MB/s]
+ 62%|######2 | 340M/548M [00:01<00:00, 223MB/s]
+ 66%|######5 | 362M/548M [00:01<00:00, 223MB/s]
+ 70%|######9 | 383M/548M [00:01<00:00, 223MB/s]
+ 74%|#######3 | 404M/548M [00:01<00:00, 224MB/s]
+ 78%|#######7 | 426M/548M [00:02<00:00, 223MB/s]
+ 82%|########1 | 447M/548M [00:02<00:00, 224MB/s]
+ 85%|########5 | 468M/548M [00:02<00:00, 224MB/s]
+ 89%|########9 | 490M/548M [00:02<00:00, 224MB/s]
+ 93%|#########3| 511M/548M [00:02<00:00, 223MB/s]
+ 97%|#########7| 533M/548M [00:02<00:00, 224MB/s]
+ 100%|##########| 548M/548M [00:02<00:00, 223MB/s]
@@ -758,22 +756,22 @@ Finally, we can run the algorithm.
  Optimizing..
  run [50]:
- Style Loss : 5.491051 Content Loss: 4.111195
+ Style Loss : 4.042071 Content Loss: 4.071409
  run [100]:
- Style Loss : 1.128039 Content Loss: 3.038853
+ Style Loss : 1.110853 Content Loss: 3.019152
  run [150]:
- Style Loss : 0.718506 Content Loss: 2.659447
+ Style Loss : 0.711853 Content Loss: 2.647382
  run [200]:
- Style Loss : 0.482143 Content Loss: 2.494146
+ Style Loss : 0.475776 Content Loss: 2.486519
  run [250]:
- Style Loss : 0.352943 Content Loss: 2.408322
+ Style Loss : 0.344838 Content Loss: 2.401526
  run [300]:
- Style Loss : 0.270402 Content Loss: 2.352659
+ Style Loss : 0.261330 Content Loss: 2.346556
@@ -782,7 +780,7 @@ Finally, we can run the algorithm.
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 38.614 seconds)
+ **Total running time of the script:** ( 0 minutes 38.513 seconds)


.. _sphx_glr_download_advanced_neural_style_tutorial.py:
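The ``Style Loss`` / ``Content Loss`` pairs above come from the Gatys-style optimization loop this tutorial runs. Below is a minimal sketch of how the two quantities are conventionally computed — Gram-matrix MSE for style, feature MSE for content; a generic reconstruction, not the tutorial's exact module code.

.. code-block:: python

   import torch
   import torch.nn.functional as F

   def gram_matrix(feat):
       # feat: (batch, channels, height, width) VGG feature map.
       b, c, h, w = feat.size()
       flat = feat.view(b * c, h * w)
       # Normalize so the loss scale is independent of feature-map size.
       return (flat @ flat.t()) / (b * c * h * w)

   def style_loss(feat, target_gram):
       return F.mse_loss(gram_matrix(feat), target_gram)

   def content_loss(feat, target_feat):
       return F.mse_loss(feat, target_feat)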
2 changes: 1 addition & 1 deletion _sources/advanced/numpy_extensions_tutorial.rst.txt
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt`` the filter.
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 0.627 seconds)
+ **Total running time of the script:** ( 0 minutes 0.616 seconds)


.. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
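The hunk context above refers to the tutorial's custom NumPy-backed operator, whose backward pass returns gradients with respect to both the input and the filter. A minimal sketch of that ``torch.autograd.Function`` pattern, using a hypothetical elementwise multiply in place of the tutorial's actual example:

.. code-block:: python

   import torch
   from torch.autograd import Function

   class NumpyMul(Function):
       """Hypothetical NumPy-backed op: out = input * filter."""

       @staticmethod
       def forward(ctx, input, filter):
           ctx.save_for_backward(input, filter)
           # Do the real work in NumPy, then wrap the result back into a tensor.
           result = input.detach().numpy() * filter.detach().numpy()
           return torch.from_numpy(result)

       @staticmethod
       def backward(ctx, grad_output):
           input, filter = ctx.saved_tensors
           grad = grad_output.detach().numpy()
           # The backward pass computes the gradient wrt the input
           # and the gradient wrt the filter.
           grad_input = torch.from_numpy(grad * filter.detach().numpy())
           grad_filter = torch.from_numpy(grad * input.detach().numpy())
           return grad_input, grad_filter

   x = torch.randn(4, requires_grad=True, dtype=torch.double)
   w = torch.randn(4, requires_grad=True, dtype=torch.double)
   NumpyMul.apply(x, w).sum().backward()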