
FluxML Projects - Summer of Code

Flux usually takes part in Google Summer of Code as a NumFOCUS organization. We follow the same rules and application guidelines as Julia, so please check there for more information on applying. Below is a set of ideas for potential projects, though you are welcome to explore anything you are interested in. Please note the year on the ideas list below; project ideas from a previous year will not always carry over to a new year.

Flux projects are typically very competitive; we encourage you to get started early, as successful contributors typically have early PRs or working prototypes as part of their application. It is a good idea to simply start contributing via issue discussion and PRs and let a project grow from there; you can take a look at this list of issues for some starter contributions. Please see the contributing guide for help getting started.

FluxML GSoC 2024 Ideas List

Writing Julia-native kernels for common NN operations

Implement optimized kernels for common neural network operations for which we don't already have Julia-native implementations. This project will require experience with GPU kernel writing and performance optimization.

Difficulty. Hard. Duration. 350 hours

Description

Many ML frameworks are moving away from vendor-specific libraries (such as cuBLAS and cuDNN) towards more generic, JIT-compiled implementations of ML-related kernels such as BLAS, softmax, and ReLU. The reasons for this move are manifold:

  • Vendor-provided libraries often only work on that vendor's hardware and software

  • These libraries only support certain element types, tensor shapes/sizes, and limited array view/stride/transpose support

  • These libraries often expect to be executed from the host, without a device-side launchable equivalent

  • These libraries have unreliable build systems or are binary blobs

Improving this state of affairs for Flux will involve using Julia's existing GPU and compute kernel libraries (e.g. KernelAbstractions.jl) to implement various accelerated, cross-vendor routines. These kernels should be both composable and performance-competitive with Flux's current generic code paths. Examples of routines specifically useful for implementing neural networks include the following (a minimal kernel sketch follows this list):

  • GEMM and GEMV

  • Softmax

  • Batchnorm and Layernorm

  • ReLU

  • Convolution/correlation
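
To make the expected starting point concrete, here is a minimal sketch of such a cross-vendor kernel written against KernelAbstractions.jl. The names relu_kernel! and relu! are illustrative rather than existing package API, and a real implementation would need tuning (workgroup sizes, vectorization) to be performance-competitive:

```julia
using KernelAbstractions

# Each work-item computes one output element; runs on CPU, CUDA, ROCm, etc.
@kernel function relu_kernel!(y, @Const(x))
    i = @index(Global, Linear)
    @inbounds y[i] = max(x[i], zero(eltype(x)))
end

function relu!(y, x)
    backend = get_backend(x)                  # use the backend the array lives on
    relu_kernel!(backend)(y, x; ndrange = length(x))
    KernelAbstractions.synchronize(backend)   # GPU kernels launch asynchronously
    return y
end
```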

The ideal candidate should have experience with what operations are used in popular ML models and how they are commonly implemented on GPUs. This includes experience writing and benchmarking high-performance GPU kernels. Because kernels will be required for both training and inference, an understanding of automatic differentiation (AD) is also highly recommended.
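
Since kernels must serve training as well as inference, each forward routine typically needs a matching gradient definition. Below is a hedged sketch of how the hypothetical relu! above could be exposed to Julia's AD ecosystem via a ChainRulesCore.rrule; the pullback uses a broadcast for brevity, where a real implementation would likely use a dedicated backward kernel:

```julia
using ChainRulesCore

# Out-of-place wrapper around the hypothetical kernel from the previous sketch.
relu(x) = relu!(similar(x), x)

# A reverse rule so AD systems (e.g. Zygote) can differentiate through the kernel.
function ChainRulesCore.rrule(::typeof(relu), x)
    y = relu(x)
    relu_pullback(ȳ) = (NoTangent(), unthunk(ȳ) .* (x .> 0))
    return y, relu_pullback
end
```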

Mentors. Julian Samaroo, Kyle Daruwalla, Brian Chen

Prerequisites

  • Julia language fluency is essential.

  • Experience with low-level GPU kernel programming is strongly recommended.

  • Experience with common primitive machine learning ops (forward and backward passes) and their interaction is recommended.

  • Familiarity with existing prior art such as tiny-cuda-nn is preferred.

Your contributions

  • A new package containing the optimized kernels and any supporting code for integration into Flux and its operation library, NNlib.jl.

  • Tests on CI and a simple benchmark harness for the new NN kernel library (a minimal correctness test is sketched after this list).

  • A proof-of-concept example showing the kernels being used with kernel fusion on device (GPU).
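
For illustration, a correctness test comparing a new kernel against the existing generic path might look like the following, reusing the hypothetical relu! from the earlier sketch:

```julia
using Test, NNlib

x = randn(Float32, 1024)
# The new kernel should agree with NNlib's reference implementation.
@test relu!(similar(x), x) ≈ NNlib.relu.(x)
```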

Creating runnable training and testing workflows for computer vision models

Write a suite of scripts that demonstrate how to use Metalhead.jl models for a variety of computer vision tasks and allow for ongoing verification of model correctness.

Difficulty. Moderate. Duration. 350 hours

Description

Metalhead.jl is Flux's computer vision library and contains a wide range of models along with pre-trained weights. However, it currently lacks a set of examples showcasing how to use the library in complete computer vision workflows, including aspects such as integrating data augmentation, manipulating hyperparameters, tracking metrics, and evaluating trained models. At the same time, Metalhead does not have a comprehensive set of end-to-end tests to ensure all models can be trained to convergence and to catch less obvious performance or correctness regressions. This project will help fill both needs by creating a set of self-contained, runnable scripts which exercise the functionality of Metalhead models across a number of tasks. The choice of models and tasks may vary, but the top priority will be commonly used ones such as ResNet for models and image classification for tasks.
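
As a hint of scale, a hedged skeleton of one such self-contained script is sketched below. The model constructor is real Metalhead API, but `dataloader`, the class count, and the hyperparameters are placeholder assumptions:

```julia
using Flux, Metalhead, Optimisers

# Untrained ResNet-18 with a hypothetical 10-class head.
model = ResNet(18; nclasses = 10)
opt_state = Optimisers.setup(Optimisers.Adam(3f-4), model)

# `dataloader` is assumed to yield (WHCN image batch, one-hot label) pairs.
for (x, y) in dataloader
    loss, grads = Flux.withgradient(model) do m
        Flux.logitcrossentropy(m(x), y)
    end
    opt_state, model = Optimisers.update!(opt_state, model, grads[1])
end
```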

The ideal candidate should have practical experience with training deep learning models for computer vision tasks, as well as sufficient familiarity with Julia to work independently with complex libraries (e.g. Flux) on a medium-sized codebase. Direct experience using Metalhead.jl is not required but is highly recommended.

Mentors. Brian Chen, Kyle Daruwalla, Abhirath Anand

Prerequisites

  • Julia language fluency is essential.

  • Experience with GitHub CI, particularly GitHub Actions, is strongly suggested.

  • Experience with more than one ML task (e.g. image classification, autoregressive language modeling) is recommended.

  • Familiarity with prior art is preferred.

Your contributions

  • A new FluxML package, FluxBenchmarks.jl, that will perform configurable benchmarking across our ML stack (a sketch of one possible suite entry follows this list).

  • GitHub Actions integration for FluxBenchmarks.jl so that the tool can be invoked from PRs.

  • A benchmarking suite that will build your experience with different types of ML models and operations across the stack.
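
A hedged sketch of what one entry in such a suite could look like, using BenchmarkTools.jl; the suite layout and names are assumptions, not an existing FluxBenchmarks.jl API:

```julia
using BenchmarkTools, Flux

# One hypothetical benchmark entry: forward pass of a small conv layer.
suite = BenchmarkGroup()
x = randn(Float32, 224, 224, 3, 16)       # WHCN image batch
layer = Conv((3, 3), 3 => 16; pad = 1)
suite["conv_forward"] = @benchmarkable $layer($x)

results = run(suite; verbose = true)      # results could be saved and diffed across PRs
```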
