
GPU docs #2510

Open · wants to merge 2 commits into `master`
Conversation

**@mcabbott** (Member) commented Oct 30, 2024:

This rewrites the start of the GPU documentation page. It aims to use simpler examples, and to stress that `model |> cu` just works, before moving on to the more exotic non-CUDA packages and the automatic `model |> gpu`.
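As a sketch of the "just works" path the rewrite emphasises (assuming a working CUDA.jl installation; the model here is illustrative, not taken from the PR):

```julia
using Flux, CUDA  # CUDA.jl must be loaded explicitly since Flux v0.14

# An ordinary CPU model:
model = Chain(Dense(2 => 3, tanh), Dense(3 => 1))

# `cu` recursively moves the parameters to the GPU:
gpu_model = model |> cu

# Inputs must live on the same device as the model:
x = cu(rand(Float32, 2, 16))
y = gpu_model(x)  # forward pass runs on the GPU
```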

Rendered MD: https://github.com/FluxML/Flux.jl/blob/gpu_docs/docs/src/guide/gpu.md

Current docs: http://fluxml.ai/Flux.jl/stable/guide/gpu/

**github-actions** (bot, Contributor) commented Oct 30, 2024:

Once the build has completed (in roughly 20 minutes), you can preview any updated documentation at this URL: https://fluxml.ai/Flux.jl/previews/PR2510/

In particular, this page: https://fluxml.ai/Flux.jl/previews/PR2510/guide/gpu/

**codecov** (bot) commented Oct 30, 2024:

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 33.20%. Comparing base (32db5d4) to head (d9c03b9).
Report is 2 commits behind head on master.

Additional details and impacted files:

```
@@           Coverage Diff           @@
##           master    #2510   +/-   ##
=======================================
  Coverage   33.20%   33.20%
=======================================
  Files          31       31
  Lines        1843     1843
=======================================
  Hits          612      612
  Misses       1231     1231
```


@mcabbott mcabbott added this to the v0.15 milestone Nov 6, 2024
@CarloLucibello CarloLucibello removed this from the v0.15 milestone Nov 16, 2024
**@CarloLucibello** (Member) commented:
Removing the milestone, as this shouldn't be blocking.

**@mcabbott** (Member, Author) commented:

The point is, in part, to think through whatever interface we're adopting by trying to explain it clearly. If it's a mess, then 0.15 is when to fix it. That's why it was on the milestone.

@mcabbott mcabbott added this to the v0.15 milestone Nov 24, 2024
**@CarloLucibello** (Member) commented:

Can we get this done so that it doesn't delay the 0.15 release, then? I would like to tag in a few days.

**@mcabbott** (Member, Author) commented:

Sure. What do you think this lacks?

@CarloLucibello CarloLucibello marked this pull request as ready for review November 25, 2024 06:16
Comment on lines +70 to +71:

```markdown
!!! compat "Flux ≤ 0.13"
    Old versions of Flux automatically loaded CUDA.jl to provide GPU support. Starting from Flux v0.14, it has to be loaded separately. Julia's [package extensions](https://pkgdocs.julialang.org/v1/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) allow Flux to automatically load some GPU-specific code when needed.
```

**Member** commented:

Suggested change — replace the compat note above with:

```markdown
## Manually selecting devices
```

I thought there was a whole `Flux.gpu_backend!` and Preferences.jl story we had to tell??
**Member** commented:
`gpu_backend!` affects the return value of `gpu_device` like this:

- If no GPU is available, it returns a `CPUDevice` object.
- If a LocalPreferences file is present, then the backend specified in the file is used. If the trigger package corresponding to the device is not loaded, a warning is displayed.
- If no LocalPreferences file is present, then the first working GPU with a loaded trigger package is used.

This is already described in the docstring of `gpu_device`. I think we shouldn't mention `gpu_backend!` at all in this guide, because it is useless in practice.
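For reference, a minimal sketch of that selection logic in use (assuming CUDA.jl is installed and functional, and that the installed MLDataDevices version supports the `force` keyword):

```julia
using Flux, CUDA  # loading the trigger package makes CUDA devices discoverable

device = gpu_device()          # CUDADevice if a GPU works, otherwise CPUDevice
model = Dense(2 => 3) |> device

# With `force=true`, gpu_device errors instead of silently falling back to the CPU:
device = gpu_device(; force=true)
```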

**@CarloLucibello** (Member) commented:
Maybe we should put a TL;DR at the top, just saying something like:

```julia
using CUDA  # or AMDGPU, or Metal
device = gpu_device()
model = model |> device
for epoch in 1:num_epochs
    for (x, y) in dataloader
        x, y = device((x, y))
        # ... compute gradients and update model ...
    end
end
```
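One way the elided body of such a loop might be filled in — a sketch, not part of the suggestion: `num_epochs`, `dataloader`, and the mean-squared-error loss are assumptions, and the optimiser setup follows Flux's Optimisers.jl-based training API:

```julia
using Flux, CUDA

device = gpu_device()
model = Chain(Dense(2 => 3, relu), Dense(3 => 1)) |> device
opt_state = Flux.setup(Adam(1e-3), model)   # per-parameter optimiser state

for epoch in 1:num_epochs                   # num_epochs, dataloader assumed defined
    for (x, y) in dataloader
        x, y = device((x, y))               # move the batch to the same device
        grads = Flux.gradient(m -> Flux.mse(m(x), y), model)
        Flux.update!(opt_state, model, grads[1])
    end
end
```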
