
Keras.io examples conversion gameplan #18468

Open

fchollet opened this issue May 27, 2023 · 21 comments

@fchollet (Collaborator) commented May 27, 2023

We need to convert keras.io examples to work with Keras 3.

This involves two stages:

Stage 1: tf.keras backwards compatibility check

Keras 3 is intended as a drop-in replacement for tf.keras. We expect most examples to work with no code changes other than changing the imports (when using TF as the backend). So the first thing to do with a keras.io example is:

  1. Install Keras 3 (via a git clone followed by python pip_build.py --install).
  2. Run the example. In many cases this involves some manual steps around data downloads and dependency installation. In many cases you should also edit hyperparameters to reduce compute intensiveness so that you can debug quickly (e.g. set epochs=1 and steps_per_epoch=3, that sort of thing — see the sketch after this list).
  3. Record anything that doesn't work out of the box and file issues on the Keras 3 GitHub accordingly. We will use these issues to improve the degree of compatibility of Keras 3 going forward. Try to work around each issue you find, so that you can reach the next issue.
  4. Open a PR to commit your converted examples in examples/keras_io/tensorflow/. PLEASE INCLUDE THE GIT DIFF (diff from the original example to the new file) in the PR description.
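
For illustration, here is a minimal sketch of steps 1–2. The repository URL, the backend environment variable, and the toy model/data are assumptions made for the sake of a self-contained snippet; they are not taken from any actual example:

    # Step 1: install Keras 3 from source (assumed repository URL).
    #   git clone https://github.com/keras-team/keras.git
    #   cd keras && python pip_build.py --install

    # Step 2: run the example against Keras 3 with the TF backend,
    # with the imports switched and hyperparameters reduced for quick debugging.
    import os
    os.environ["KERAS_BACKEND"] = "tensorflow"  # Stage 1 targets the TF backend

    import numpy as np
    import keras  # instead of `from tensorflow import keras`
    from keras import layers

    # Hypothetical tiny model and data, standing in for the example's own code.
    model = keras.Sequential([layers.Dense(8, activation="relu"), layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    x = np.random.rand(32, 4)
    y = np.random.rand(32, 1)

    # Reduced compute so a debug run finishes quickly.
    model.fit(x, y, epochs=1, batch_size=8)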

Note: in some cases this conversion will not be possible. There is some niche functionality that we removed from Keras 3, such as add_metric. When hitting such problems, if unable to work around the issue, simply record the problem in a GitHub issue and move on.

Stage 2: backend-agnostic conversion

Going one step further, once an example runs with the TF backend, we should seek to replace all TF APIs in the example with backend-agnostic keras.ops APIs.
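
For illustration, such a replacement might look like the following; the `normalize` helper is a hypothetical snippet, not taken from any particular example:

    import tensorflow as tf
    from keras import ops

    # TF-specific version, as typically found in existing examples:
    def tf_normalize(x):
        mean = tf.reduce_mean(x, axis=-1, keepdims=True)
        std = tf.math.reduce_std(x, axis=-1, keepdims=True)
        return (x - mean) / (std + 1e-6)

    # Backend-agnostic version using keras.ops, which runs on TF, JAX, or PyTorch:
    def keras_normalize(x):
        mean = ops.mean(x, axis=-1, keepdims=True)
        std = ops.std(x, axis=-1, keepdims=True)
        return (x - mean) / (std + 1e-6)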

In some cases this is not possible. Keras Core does not have backend-agnostic capabilities for custom train_step or custom training loops. In such cases, you should convert what is convertible, and then fork the example into two separate versions: a TF one and a JAX one, using APIs from each framework to implement the low-level functionality.

Keep in mind that it's ok to use TF APIs for data I/O and preprocessing. We only aim to convert modeling and training APIs -- all data preprocessing can stay as-is even if it uses TF. TF is the only feature-complete framework when it comes to data preprocessing, and generally the only viable option for many use cases.

Note on Keras preprocessing layers and tf.data: you can't use Keras 3 preprocessing layers in a tf.data pipeline when using a backend that is not TF. As a result, if you need to use Keras preprocessing layers in tf.data, import them from tf.keras.
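
A minimal sketch of that pattern follows; the RandomFlip layer and the placeholder data are arbitrary choices for illustration, not mandated by any keras.io example:

    import numpy as np
    import tensorflow as tf

    # When the Keras backend is not TF, use a tf.keras preprocessing layer
    # inside the tf.data pipeline instead of the Keras 3 one.
    augment = tf.keras.layers.RandomFlip("horizontal")

    def preprocess(image, label):
        return augment(image, training=True), label

    # Placeholder data standing in for the example's real data loading code.
    images = np.random.rand(8, 32, 32, 3).astype("float32")
    labels = np.random.randint(0, 10, size=(8,))

    dataset = (
        tf.data.Dataset.from_tensor_slices((images, labels))
        .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
        .batch(4)
        .prefetch(tf.data.AUTOTUNE)
    )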

Once you have converted an example to use backend-agnostic APIs and run with JAX and TF, open a PR to commit it:

  • in examples/keras_io/tf/ (it should replace the existing one if it is there) or examples/keras_io/jax/ if it's backend-specific.
  • in examples/keras_io/ if it's backend-agnostic.

Let's go!

Assignment - stage 1: conversion to Keras 3 with TF backend

CV

  • Image classification from scratch: @fchollet
  • Simple MNIST convnet: @fchollet
  • Image classification via fine-tuning with EfficientNet @divyashreepathihalli
  • Image classification with Vision Transformer
  • Image Classification using BigTransfer (BiT)
  • Classification using Attention-based Deep Multiple Instance Learning: @hertschuh
  • Image classification with modern MLP models @divyashreepathihalli
  • A mobile-friendly Transformer-based model for image classification
  • Pneumonia Classification on TPU
  • Compact Convolutional Transformers
  • Image classification with ConvMixer
  • Image classification with EANet (External Attention Transformer)
  • Involutional neural networks: @hertschuh
  • Image classification with Perceiver @divyashreepathihalli
  • Few-Shot learning with Reptile
  • Semi-supervised image classification using contrastive pretraining with SimCLR @grasskin
  • Image classification with Swin Transformers @grasskin
  • Train a Vision Transformer on small datasets
  • A Vision Transformer without Attention
  • Image segmentation with a U-Net-like architecture @sampathweb
  • Multiclass semantic segmentation using DeepLabV3+
  • Object Detection with RetinaNet
  • Keypoint Detection with Transfer Learning
  • Object detection with Vision Transformers @grasskin
  • OCR model for reading Captchas @grasskin
  • Handwriting recognition
  • Convolutional autoencoder for image denoising: @fchollet
  • Low-light image enhancement using MIRNet
  • Image Super-Resolution using an Efficient Sub-Pixel CNN
  • Enhanced Deep Residual Networks for single-image super-resolution
  • Zero-DCE for low-light image enhancement
  • CutMix data augmentation for image classification
  • MixUp augmentation for image classification
  • RandAugment for Image Classification for Improved Robustness
  • Image captioning @divyashreepathihalli
  • Natural language image search with a Dual Encoder
  • Visualizing what convnets learn
  • Model interpretability with Integrated Gradients: @AakashKumarNain
  • Investigating Vision Transformer representations
  • Grad-CAM class activation visualization: @fchollet
  • Near-duplicate image search
  • Semantic Image Clustering
  • Image similarity estimation using a Siamese Network with a contrastive loss @hertschuh
  • Image similarity estimation using a Siamese Network with a triplet loss @hertschuh
  • Metric learning for image similarity search: @fchollet
  • [Need TF-Similarity] Metric learning for image similarity search using TensorFlow Similarity
  • Video Classification with a CNN-RNN Architecture
  • Next-Frame Video Prediction with Convolutional LSTMs: @fchollet
  • Video Classification with Transformers
  • Video Vision Transformer
  • [Need KerasCV] Semi-supervision and domain adaptation with AdaMatch
  • Barlow Twins for Contrastive SSL
  • Class Attention Image Transformers with LayerScale
  • Consistency training with supervision
  • Distilling Vision Transformers
  • FixRes: Fixing train-test resolution discrepancy
  • Focal Modulation: A replacement for Self-Attention
  • Using the Forward-Forward Algorithm for Image Classification
  • Gradient Centralization for Better Training Performance
  • Knowledge Distillation
  • Learning to Resize in Computer Vision @divyashreepathihalli
  • Masked image modeling with Autoencoders
  • Self-supervised contrastive learning with NNCLR
  • Augmenting convnets with aggregated attention
  • Point cloud segmentation with PointNet
  • Semantic segmentation with SegFormer and Hugging Face Transformers
  • Self-supervised contrastive learning with SimSiam
  • Supervised Contrastive Learning @divyashreepathihalli
  • When Recurrence meets Transformers
  • Learning to tokenize in Vision Transformers

NLP

  • Text classification from scratch: @fchollet
  • Review Classification using Active Learning: @sampathweb
  • Text Classification using FNet
  • Large-scale multi-label text classification
  • Text classification with Transformer @nkovela1
  • Text classification with Switch Transformer @nkovela1
  • Text classification using Decision Forests and pretrained embeddings
  • Using pre-trained word embeddings @sampathweb
  • Bidirectional LSTM on IMDB @sampathweb
  • English-to-Spanish translation with KerasNLP @nkovela1
  • English-to-Spanish translation with a sequence-to-sequence Transformer
  • Character-level recurrent sequence-to-sequence model: @fchollet
  • [Need TF Hub] Multimodal entailment
  • Named Entity Recognition using Transformers
  • Text Extraction with BERT
  • Sequence to sequence learning for performing number addition
  • Semantic Similarity with BERT
  • End-to-end Masked Language Modeling with BERT
  • Pretraining BERT with Hugging Face Transformers
  • Training a language model from scratch with 🤗 Transformers and TPUs
  • Question Answering with Hugging Face Transformers
  • Abstractive Summarization with Hugging Face Transformers

Structured data

  • Structured data classification with FeatureSpace
  • Imbalanced classification: credit card fraud detection: @fchollet
  • Structured data classification from scratch: @fchollet
  • Structured data learning with Wide, Deep, and Cross networks
  • Classification with Gated Residual and Variable Selection Networks
  • Classification with TensorFlow Decision Forests
  • Classification with Neural Decision Forests
  • Structured data learning with TabTransformer
  • Collaborative Filtering for Movie Recommendations: @sampathweb
  • A Transformer-based recommendation system

Timeseries

  • Timeseries classification from scratch @sampathweb
  • Timeseries classification with a Transformer model: @sampathweb
  • Electroencephalogram Signal Classification for action identification: @fchollet
  • Timeseries anomaly detection using an Autoencoder: @fchollet
  • Traffic forecasting using graph neural networks and LSTM: @fchollet
  • Timeseries forecasting for weather prediction: @fchollet

Generative

  • Denoising Diffusion Implicit Models
  • A walk through latent space with Stable Diffusion
  • DreamBooth
  • Denoising Diffusion Probabilistic Models: @fchollet
  • Teach StableDiffusion new concepts via Textual Inversion
  • Fine-tuning Stable Diffusion
  • Variational AutoEncoder: @fchollet
  • GAN overriding model train_step
  • WGAN-GP overriding Model.train_step: @fchollet
  • Conditional GAN
  • CycleGAN
  • Data-efficient GANs with Adaptive Discriminator Augmentation
  • Deep Dream: @fchollet
  • GauGAN for conditional image generation
  • PixelCNN
  • Face image generation with StyleGAN @nkovela1
  • Vector-Quantized Variational Autoencoders
  • Neural style transfer: @fchollet
  • Neural Style Transfer with AdaIN
  • GPT2 Text Generation with KerasNLP @divyashreepathihalli
  • GPT text generation from scratch with KerasNLP
  • [Need KerasNLP] Text generation with a miniature GPT
  • Character-level text generation with LSTM: @fchollet
  • Text Generation using FNet
  • Drug Molecule Generation with VAE
  • WGAN-GP with R-GCN for the generation of small molecular graphs
  • Density estimation using Real NVP

Other

  • Automatic Speech Recognition using CTC
  • MelGAN-based spectrogram inversion using feature matching
  • Speaker Recognition
  • Automatic Speech Recognition with Transformer
  • English speaker accent recognition using Transfer Learning
  • Audio Classification with Hugging Face Transformers
  • Actor Critic Method
  • Deep Deterministic Policy Gradient (DDPG)
  • Deep Q-Learning for Atari Breakout
  • Proximal Policy Optimization
  • Graph attention network (GAT) for node classification
  • Node Classification with Graph Neural Networks
  • Message-passing neural network (MPNN) for molecular property prediction
  • Graph representation learning with node2vec
  • Simple custom layer example: Antirectifier: @fchollet
  • Probabilistic Bayesian Neural Networks
  • Knowledge distillation recipes
  • Creating TFRecords
  • Keras debugging tips
  • Endpoint layer pattern: @fchollet
  • Memory-efficient embeddings for recommendation systems
  • A Quasi-SVM in Keras: @fchollet
  • Estimating required sample size for model training
  • Evaluating and exporting scikit-learn metrics in a Keras callback
  • Customizing the convolution operation of a Conv2D layer: @fchollet
  • Writing Keras Models With TensorFlow NumPy: @fchollet
  • Serving TensorFlow models with TFServing: @fchollet
  • How to train a Keras model on TFRecord files
  • Trainer pattern: @fchollet

List of examples with significant incompatibilities

  • Trainer pattern
    • Reason: the trainer subclassing style has changed significantly, e.g. there is no more compiled_loss/compiled_metrics (see the sketch below).
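
For context, a minimal sketch of the Keras 3 style under the TF backend (a hypothetical model, not the actual Trainer-pattern example): the loss is computed via self.compute_loss() and the metrics in self.metrics are updated directly, instead of going through the removed self.compiled_loss / self.compiled_metrics:

    import tensorflow as tf
    import keras

    class CustomModel(keras.Model):
        # Hypothetical model illustrating the Keras 3 train_step style (TF backend).
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            self.dense = keras.layers.Dense(1)

        def call(self, x):
            return self.dense(x)

        def train_step(self, data):
            x, y = data
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)
                # Keras 3: compute_loss() replaces self.compiled_loss(...)
                loss = self.compute_loss(y=y, y_pred=y_pred)
            grads = tape.gradient(loss, self.trainable_variables)
            self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
            # Keras 3: update self.metrics directly instead of self.compiled_metrics
            for metric in self.metrics:
                if metric.name == "loss":
                    metric.update_state(loss)
                else:
                    metric.update_state(y, y_pred)
            return {m.name: m.result() for m in self.metrics}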

List of examples that cannot be converted at all

  • A Quasi-SVM in Keras.
    • Reason: it critically uses tf.keras.layers.experimental.RandomFourierFeatures, which is not included in Keras Core.
@AakashKumarNain (Contributor) commented:

Wow! Been contributing to these examples for a long time but never realized that we have so many high-quality examples. Amazing feat! 👏

PS: I will set up a keras_core GPU env and start working on some of the examples I contributed. I will update the issue accordingly.

@soumik12345 (Contributor) commented:

Raised a PR to port the example Zero-DCE for low-light image enhancement to keras_core: keras-team/keras-core#486
Also found a possible bug while doing so: keras-team/keras-core#485

@soumik12345 (Contributor) commented:

Raised a PR to port the example Low-light image enhancement using MIRNet to keras-core: keras-team/keras-core#491

@anas-rz (Contributor) commented Jul 17, 2023

Raised a PR to port the example A Vision Transformer without Attention: keras-team/keras-core#497

@anas-rz (Contributor) commented Jul 18, 2023

Raised a PR to port the example Compact Convolutional Transformers to keras-core: keras-team/keras-core#523

@pksX01 (Contributor) commented Oct 8, 2023

I would like to take the 'Question Answering with Hugging Face Transformers' task.

@pksX01 (Contributor) commented Oct 8, 2023

> I would like to take the 'Question Answering with Hugging Face Transformers' task.

I am facing an issue when running it after my changes, and I have raised issue #18572 for it.

@madhusshivakumar (Contributor) commented Oct 27, 2023

I would like to work on:

  • examples/vision/cait.py (#18700), facing issue #18699
  • examples/vision/metric_learning.py (#18701), facing issue #18698

@madhusshivakumar (Contributor) commented:

I was working on the MovieLens recommendations example and raised PR #18690 for it.

@ben-ad commented Nov 12, 2023

I would like to work on "Text extraction with Bert" stage 1.

PS: the Stage 1 list above is not up to date on what has already been done.

@sitamgithub-MSIT (Contributor) commented:

"Image Classification using BigTransfer (BiT)" is the task that I would like to take on. Image Classification using BigTransfer (BiT)

Note: for now I will focus on Stage 1, the tf.keras backwards compatibility check. Making the example backend-agnostic is the remainder, which I will attempt in Stage 2.

@sitamgithub-MSIT (Contributor) commented:

"Image Classification using BigTransfer (BiT)" is the task that I would like to take on. Image Classification using BigTransfer (BiT)

Note: As of right now, I will focus on Stage 1: the tf.keras backward compatibility check. To make the backend agnostic, the remainder will attempt to work on stage 2.

Update: I found some issues specifically related to the model on TF Hub (now Kaggle Models). I will create a detailed issue describing my problems in a day or two.

@sitamgithub-MSIT (Contributor) commented:

"Train a Vision Transformer on small datasets" is the task that I would like to take on next. Train a Vision Transformer on small datasets

@pksX01 (Contributor) commented Dec 2, 2023

"Train a Vision Transformer on small datasets" is the task that I would like to take on next. Train a Vision Transformer on small datasets

@sitamgithub-MSIT I have already been working on this for the last couple of days. I had also raised an issue I was facing in this script, but that issue is now gone and Stage 1 is already complete. I will raise a PR soon.

Please select a different problem/example.

@innat-asj commented Dec 2, 2023

I've just noticed that this Keras example, Semi-supervised image classification using contrastive pretraining with SimCLR (link), has changed significantly (original author @beresandras, updated by @ariG23498). The core contributed part (the SimCLR modelling) has been replaced with a built-in API (why?). Also, what is the purpose of keras_cv.training? It looks like an uncertain API.

class SimCLRTrainer(keras_cv.training.ContrastiveTrainer):
    def __init__(self, encoder, augmenter, projector, probe=None, **kwargs):
        super().__init__(
            encoder=encoder,
            augmenter=augmenter,
            projector=projector,
            probe=probe,
            **kwargs,
        )

simclr_model = SimCLRTrainer(...)

@ariG23498 (Contributor) commented:

That was part of the Keras Sprint.

CC: @martin-gorner

@sitamgithub-MSIT (Contributor) commented:

Working on the MixUp augmentation for image classification

@innat-asj commented:

> That was part of the Keras Sprint.
>
> CC: @martin-gorner

Looks like it (link) has been reverted to its original form.

@sitamgithub-MSIT (Contributor) commented:

> Working on the MixUp augmentation for image classification

It seems like it has already been converted to Keras 3. The list above is not updated, and moreover, on the Keras website it still shows as a Keras 2 example.

CC: @fchollet

@innat commented Jan 12, 2024

Instead of replacing the Keras 2 examples with Keras 3 ones, why not keep both in the code examples? tf.keras is not becoming invalid any time soon, or is it?
