[TASK] Remove LSTM runtime from model
GitOrigin-RevId: 402e8add0164939694ead858f9da74034f550fbf
mckornfield committed Feb 13, 2025
1 parent 96fac2e commit 37e2614
Showing 18 changed files with 5 additions and 1,714 deletions.
53 changes: 0 additions & 53 deletions README.md
@@ -37,14 +37,12 @@ This section will guide you through installation of `gretel-synthetics` and dependencies
By default, we do not install certain core requirements; the following dependencies should be installed _external to the installation_
of `gretel-synthetics`, depending on which model(s) you plan to use.

- Tensorflow: Used by the LSTM model, we recommend version 2.12.x
- Torch: Used by Timeseries DGAN and ACTGAN (for ACTGAN, Torch is installed by SDV), we recommend version 2.0
- SDV (Synthetic Data Vault): Used by ACTGAN, we recommend version 0.17.x

These dependencies can be installed by doing the following:

```
pip install tensorflow==2.12.1 # for LSTM
pip install "sdv<0.18" # for ACTGAN
pip install torch==2.0 # for Timeseries DGAN
```
@@ -114,54 +112,3 @@ Keep in mind that this will also install several dependencies like PyTorch that
versions installed for use with other models like Timeseries DGAN.

The ACTGAN interface is a superset of the CTGAN interface. To see the additional features, please take a look at the ACTGAN demo notebook in the `examples` directory of this repo.
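
For instance, a minimal run might look like the following sketch, which assumes the CTGAN-style `fit` / `sample` interface; the file name and `epochs` value are illustrative:

```
import pandas as pd

from gretel_synthetics.actgan import ACTGAN

train_df = pd.read_csv("my_table.csv")  # illustrative input

# CTGAN-style interface; hyperparameters here are illustrative
model = ACTGAN(epochs=100)
model.fit(train_df)

synthetic_df = model.sample(1000)  # generate 1,000 synthetic rows
print(synthetic_df.head())
```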

## LSTM Overview

This package lets developers get started quickly with neural-network-based synthetic data generation. The more complex pieces of working with libraries like Tensorflow and differential privacy are bundled into friendly Python classes and functions. There are two high-level modes.

### Simple Mode

The simple mode trains line-per-line on an input file of text. When generating data, the generator yields a custom object that can be used in a variety of ways, depending on your use case. [This notebook](https://github.com/gretelai/gretel-synthetics/blob/master/examples/tensorflow/simple-character-model.ipynb) demonstrates this mode.
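
A minimal sketch of this flow, assuming the `LocalConfig` / `train_rnn` / `generate_text` entry points used in that notebook; file names and hyperparameters are illustrative:

```
from gretel_synthetics.config import LocalConfig
from gretel_synthetics.generate import generate_text
from gretel_synthetics.train import train_rnn

# Illustrative settings; tune for your dataset
config = LocalConfig(
    input_data_path="my_text.txt",  # one training example per line
    checkpoint_dir="checkpoints",
    epochs=15,
)

train_rnn(config)

# Each yielded object bundles the generated text with validity metadata
for line in generate_text(config, num_lines=100):
    print(line.text)
```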

### DataFrame Mode

This library supports CSV / DataFrames natively using the DataFrame "batch" mode. This module provides a wrapper around simple mode that is geared toward working with tabular data. It can also handle a large number of columns by breaking the input DataFrame into "batches" of columns and training a model on each batch. [This notebook](https://github.com/gretelai/gretel-synthetics/blob/master/examples/dataframe_batch.ipynb) shows an overview of using this library with DataFrames natively.
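
A sketch of batch mode, assuming the `DataFrameBatch` wrapper demonstrated in that notebook; the input file and config template values are illustrative:

```
import pandas as pd

from gretel_synthetics.batch import DataFrameBatch

source_df = pd.read_csv("my_data.csv")  # illustrative input

# Config template applied to the model trained on each batch of columns
config_template = {
    "epochs": 15,  # illustrative
    "checkpoint_dir": "checkpoints",
}

# Break a wide DataFrame into batches of up to 15 columns
batcher = DataFrameBatch(df=source_df, config=config_template, batch_size=15)
batcher.create_training_data()
batcher.train_all_batches()

batcher.generate_all_batch_lines(num_lines=1000)
synthetic_df = batcher.batches_to_df()
```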

### Components

There are four primary components to be aware of when using this library.

1. Configurations. Configurations are classes that are specific to an underlying ML engine used to train and generate data. An example would be using `TensorFlowConfig` to create all the necessary parameters to train a model based on TF. `LocalConfig` is aliased to `TensorFlowConfig` for backwards compatibility with older versions of the library. A model is saved to a designated directory, which can optionally be archived and utilized later.

2. Tokenizers. Tokenizers convert input text into integer-based IDs that are used by the underlying ML engine. A tokenizer can be created and passed to the training routine; this is optional, and if no specific tokenizer is specified then a default one will be used. You can find [an example](https://github.com/gretelai/gretel-synthetics/blob/master/examples/tensorflow/batch-df-char-tokenizer.ipynb) here that uses a simple char-by-char tokenizer to build a model from an input CSV. When training in a non-differentially private mode, we suggest using the default `SentencePiece` tokenizer, an unsupervised tokenizer that learns subword units (e.g., **byte-pair-encoding (BPE)** [[Sennrich et al.](http://www.aclweb.org/anthology/P16-1162)] and the **unigram language model** [[Kudo](https://arxiv.org/abs/1804.10959)]) for faster training and increased accuracy of the synthetic model.

3. Training. Training combines the configuration and tokenizer to build a model, stored in the designated directory, that can be used to generate new records.

4. Generation. Once a model is trained, any number of new lines or records can be generated. Optionally, a record validator can be provided to ensure that the generated data meets any necessary constraints. See our notebooks for examples of validators, and the sketch below for how these components fit together.
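
Putting the four components together, here is a sketch that assumes the `TensorFlowConfig`, `CharTokenizerTrainer`, `train`, and `generate_text` interfaces from the linked notebooks; the validator is a hypothetical example:

```
from gretel_synthetics.config import TensorFlowConfig
from gretel_synthetics.generate import generate_text
from gretel_synthetics.tokenizers import CharTokenizerTrainer
from gretel_synthetics.train import train

# 1. Configuration: engine-specific parameters (values are illustrative)
config = TensorFlowConfig(
    input_data_path="my_data.csv",
    checkpoint_dir="checkpoints",
)

# 2. Tokenizer (optional): char-by-char instead of the default SentencePiece
tokenizer = CharTokenizerTrainer(config=config)

# 3. Training: builds a model in checkpoint_dir
train(config, tokenizer)

# 4. Generation: a hypothetical validator; raising rejects the line
def validate_record(line: str) -> None:
    if len(line.split(",")) != 3:
        raise ValueError("record must have exactly 3 fields")

for record in generate_text(config, line_validator=validate_record, num_lines=50):
    if record.valid:
        print(record.text)
```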

### Utilities

In addition to the four primary components, the `gretel-synthetics` package also ships with a set of utilities that are helpful for training advanced synthetics models and evaluating synthetic datasets.

Some of this functionality carries large dependencies, so it is shipped as an extra called `utils`. To install these dependencies, you may run:

```
pip install gretel-synthetics[utils]
```

For additional details, please refer to the [Utility module API docs](https://synthetics.docs.gretel.ai/en/latest/utils/index.html).

### Differential Privacy

Differential privacy support for our TensorFlow mode is built on the great work being done by the Google TF team and their [TensorFlow Privacy library](https://github.com/tensorflow/privacy).

When utilizing DP, we currently recommend using the character tokenizer, as it creates a vocabulary of single-character tokens only, removing the risk of sensitive data being memorized as multi-character tokens that could be replayed during generation.

There are also a few notable configuration options, illustrated in the sketch below:

- `predict_batch_size` should be set to 1
- `dp` should be enabled
- `learning_rate`, `dp_noise_multiplier`, `dp_l2_norm_clip`, and `dp_microbatches` can be adjusted to achieve various epsilon values.
- `reset_states` should be disabled
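
A sketch of a DP-oriented configuration reflecting the options above; the numeric values are illustrative, not tuned recommendations:

```
from gretel_synthetics.config import TensorFlowConfig

config = TensorFlowConfig(
    input_data_path="sensitive.txt",
    checkpoint_dir="dp_checkpoints",
    dp=True,                  # enable differentially private training
    learning_rate=0.001,      # tune together with the dp_* options below
    dp_noise_multiplier=1.1,  # adjust these to reach the epsilon
    dp_l2_norm_clip=1.0,      #   your use case requires
    dp_microbatches=1,
    predict_batch_size=1,
    reset_states=False,
)
```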

Please see our [example Notebook](https://github.com/gretelai/gretel-synthetics/blob/master/examples/tensorflow/diff_privacy.ipynb) for training a DP model based on the [Netflix Prize](https://en.wikipedia.org/wiki/Netflix_Prize) dataset.
5 changes: 1 addition & 4 deletions requirements/base.txt
@@ -1,10 +1,7 @@
category-encoders==2.4.0
joblib==1.4.2
numpy>=1.18.0,<1.24
packaging<22
pandas>=1.1.0,<2
protobuf>=4,<=4.24.0
rdt>=1.2,<1.3
sdv>=0.17,<0.18
sentencepiece==0.2.0
smart_open>=2.1.0,<8.0
tqdm<5.0
7 changes: 0 additions & 7 deletions requirements/tensorflow.txt

This file was deleted.

4 changes: 1 addition & 3 deletions setup.py
@@ -33,8 +33,7 @@ def reqs(file, without=None):
utils_reqs = reqs("requirements/utils.txt")
test_reqs = reqs("requirements/test.txt")
torch_reqs = reqs("requirements/torch.txt")
tf_reqs = reqs("requirements/tensorflow.txt")
all_reqs = [base_reqs, utils_reqs, torch_reqs, tf_reqs]
all_reqs = [base_reqs, utils_reqs, torch_reqs]

setup(
name="gretel-synthetics",
@@ -56,7 +55,6 @@ def reqs(file, without=None):
"utils": utils_reqs,
"test": test_reqs,
"torch": torch_reqs,
"tensorflow": tf_reqs,
"docs": doc_reqs,
},
classifiers=[
19 changes: 3 additions & 16 deletions src/gretel_synthetics/config.py
@@ -13,13 +13,8 @@
from pathlib import Path
from typing import Callable, Optional, TYPE_CHECKING

import tensorflow as tf

import gretel_synthetics.const as const

from gretel_synthetics.tensorflow.generator import TensorFlowGenerator
from gretel_synthetics.tensorflow.train import train_rnn

logging.basicConfig(
format="%(asctime)s : %(threadName)s : %(levelname)s : %(message)s",
level=logging.INFO,
@@ -280,12 +275,6 @@ class TensorFlowConfig(BaseConfig):

def __post_init__(self):
if self.dp:
major, minor, _ = tf.__version__.split(".")
if (int(major), int(minor)) < (2, 4):
raise RuntimeError(
"Running in differential privacy mode requires TensorFlow 2.4.x or greater. "
"Please see the README for details"
)

# TODO: To enable micro-batch size greater than 1, we need to update the differential privacy
# optimizer loss function to compute the vector of per-example losses, rather than the mean
@@ -319,15 +308,13 @@ def __post_init__(self):
super().__post_init__()

def get_generator_class(self):
return TensorFlowGenerator
return None

def get_training_callable(self):
return train_rnn
return None

def gpu_check(self):
device_name = tf.test.gpu_device_name()
if not device_name.startswith("/device:GPU:"):
logging.warning("***** GPU not found, CPU will be used instead! *****")
pass


#################
Empty file.
61 changes: 0 additions & 61 deletions src/gretel_synthetics/tensorflow/default_model.py

This file was deleted.

159 changes: 0 additions & 159 deletions src/gretel_synthetics/tensorflow/dp_model.py

This file was deleted.
