OOM Fix (#217)
* Fixed the MPI initialization issue (#207)

* Bring v1.0 to the most recent commit  (#202)

* Request changes from MLPerf Storage (#199)

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* Fixed potential insufficient samples when num_files is not divisible by comm.size (#200); see the first sketch after this commit message

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* brought back dlio_profiler

* fixed potential insufficient samples

* Update tf_reader.py

* MLPerf requests (#201)

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* fixed issue with dlio_profiler

* bring back dlio_profiler_py

* sync up (#205)

* Request changes from MLPerf Storage (#199)

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* Fixed potential insufficient samples when num_files is not divisible by comm.size (#200)

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* brought back dlio_profiler

* fixed potential insufficient samples

* Update tf_reader.py

* MLPerf requests (#201)

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* fixed issue with dlio_profiler

* bring back dlio_profiler_py

* Bring v1.0 to the most recent commit  (#202) (#203)

* Request changes from MLPerf Storage (#199)

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* Fixed potential insufficient samples when num_files is not divisible by comm.size (#200)

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* brought back dlio_profiler

* fixed potential insufficient samples

* Update tf_reader.py

* MLPerf requests (#201)

* added AU metric to the configuration file; set shuffling and shuffle buffer size to 2 for cosmoflow

* removed dependencies on dlioprofiler

* fixed bugs

* fixed issue with dlio_profiler

* bring back dlio_profiler_py

* Fix requirements file (#204)

Signed-off-by: Johnu George <[email protected]>

---------

Signed-off-by: Johnu George <[email protected]>
Co-authored-by: Johnu George <[email protected]>

* added a barrier at the beginning

* fixed bugs

* fixed MPI initialization issue; see the second sketch after this commit message

---------

Signed-off-by: Johnu George <[email protected]>
Co-authored-by: Johnu George <[email protected]>

* Switch DLIO Profiler to DFTracer. (#208)

* Switch DLIO Profiler to DFTracer.

* Github workflow fixed.

* DFTracer pip package address fixed.

* switched MPI to OpenMPI

* added debug build so that symbols are shown.

* added DEBUG logging for DFTRACER

* removed explicit install of DFTracer, as it is installed during pip install.

* added cleanup code.

* CI cleanup

* Updated for DFTracer changes

* switched to 1.0.1

Due to a simple PyPI bug

* Switch to release 1.0.2

* Package info updated. (#210)

* Update dali_tfrecord_reader.py to import PerfTrace and Profile from utils.utility

* Update dali_npy_reader.py

* Update dali_image_reader.py

* Update custom_npz_reader.py

* Update custom_torch_data_loader.py [ci-skip]

* Update pytorch_checkpointing.py [ci-skip]

* Update native_dali_data_loader.py [ci-skip]

* Publish on PyPI (#211)

* PyPI publish workflow added.

* `requirements.txt` cleaned up.

* Action names refactored.

* Missing requirements fixed.

* `setup.py` requirement versions refactored.

* CI support for running via `requirements.txt`.

* `VENV_PATH` fixed in CI script.

* CI script conditional executable fix.

* Cleared redundant CI matrix options.

---------

Co-authored-by: Izzet Yildirim <[email protected]>
Co-authored-by: Izzet Yildirim <[email protected]>
Co-authored-by: Huihuo Zheng <[email protected]>

* Fix README CI badge (#212)

* Ignore file indexing for native data loaders.

Sample building and file indexing are needed only for DLIO-created data loaders. Native data loaders provide their own APIs and handle their own indexing, so this sampling can be skipped for them; see the third sketch after this commit message.

---------

Signed-off-by: Johnu George <[email protected]>
Co-authored-by: Johnu George <[email protected]>
Co-authored-by: Hariharan Devarajan <[email protected]>
Co-authored-by: Izzet Yildirim <[email protected]>
Co-authored-by: Izzet Yildirim <[email protected]>
Co-authored-by: hariharandev1 <[email protected]>
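
The #200 fix addresses the sample-shortfall bullets above: when num_files is not divisible by comm.size, a naive even split leaves some ranks short of files and therefore of samples. Below is a minimal sketch of the padding idea using mpi4py; the helper name and the round-robin assignment are illustrative assumptions, not DLIO's actual implementation.

    from mpi4py import MPI

    def partition_files(files, comm=MPI.COMM_WORLD):
        """Give every rank the same number of files, reusing files as padding."""
        size, rank = comm.Get_size(), comm.Get_rank()
        padded = list(files)
        # Pad by cycling through the original list until the total is
        # divisible by the number of ranks, so no rank comes up short.
        i = 0
        while len(padded) % size != 0:
            padded.append(files[i % len(files)])
            i += 1
        # Round-robin assignment: rank r takes files r, r + size, r + 2*size, ...
        return padded[rank::size]

    if __name__ == "__main__":
        mine = partition_files([f"img_{i:04d}.npz" for i in range(10)])
        print(f"rank {MPI.COMM_WORLD.Get_rank()} -> {mine}")

Run under mpirun (e.g. mpirun -np 4) to see that every rank receives the same number of files even though 10 is not divisible by 4.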
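
The two MPI bullets in #207 (the opening barrier and the initialization fix) both concern startup ordering: every rank should see a fully initialized MPI world before any data generation or timing begins. A minimal sketch of that pattern with mpi4py; the helper is an illustrative assumption, not DLIO's actual code.

    from mpi4py import MPI

    def init_comm():
        # mpi4py normally initializes MPI on import, but guard anyway for
        # environments where automatic initialization is disabled.
        if not MPI.Is_initialized():
            MPI.Init()
        comm = MPI.COMM_WORLD
        # Barrier at the very beginning: no rank proceeds to data generation
        # or timing until all ranks have started up.
        comm.Barrier()
        return comm

    comm = init_comm()
    print(f"rank {comm.Get_rank()} of {comm.Get_size()} past the opening barrier")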
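
The file-indexing change described above amounts to a guard: only DLIO-created data loaders need the benchmark to build a global (file, sample) index, while native loaders index data through their own APIs. A minimal sketch of that guard; the names are hypothetical, not DLIO's actual classes.

    NATIVE_DATA_LOADERS = {"dali"}  # loaders that index data via their own APIs

    def build_sample_index(files, samples_per_file, data_loader):
        """Return a global (file, sample) index, or None for native loaders."""
        if data_loader in NATIVE_DATA_LOADERS:
            # The native loader handles its own indexing; skip sample building.
            return None
        return [(f, s) for f in files for s in range(samples_per_file)]

    assert build_sample_index(["a.npz", "b.npz"], 4, "dali") is None
    assert len(build_sample_index(["a.npz", "b.npz"], 4, "pytorch")) == 8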
6 people authored Aug 6, 2024
1 parent bc693c6 commit 2fcdbfa
Showing 27 changed files with 435 additions and 459 deletions.
53 changes: 53 additions & 0 deletions .github/workflows/cd.yml
@@ -0,0 +1,53 @@
name: Release

on:
  release:
    types: [published]

permissions:
  contents: read

jobs:
  release-build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"

      - name: Build release distributions
        run: |
          # NOTE: put your own distribution build steps here.
          python -m pip install build
          python -m build

      - name: Upload distributions
        uses: actions/upload-artifact@v4
        with:
          name: release-dists
          path: dist/

  pypi-publish:
    runs-on: ubuntu-latest

    needs:
      - release-build

    permissions:
      id-token: write

    steps:
      - name: Retrieve release distributions
        uses: actions/download-artifact@v4
        with:
          name: release-dists
          path: dist/

      - name: Publish release distributions to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          user: __token__
          password: ${{ secrets.PYPI_DLIO_TOKEN }}
241 changes: 241 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,241 @@
name: Build and Test

on:
  pull_request:
    branches: [main, dev]
  push:

jobs:
  build-and-test:
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-22.04]
        gcc: [10]
        python: ["3.9", "3.10", "3.11"]
        venv: ["via-setup", "via-reqs"]
    name: ${{ matrix.os }}-${{ matrix.gcc }}-${{ matrix.python }}-${{ matrix.venv }}
    runs-on: ${{ matrix.os }}
    env:
      CC: gcc-${{ matrix.gcc }}
      CXX: g++-${{ matrix.gcc }}
      DFTRACER_BUILD_TYPE: "Debug"
      DFTRACER_ENABLE: 1
      DFTRACER_LOG_LEVEL: "DEBUG"
      DLIO_EXEC: ${{ matrix.venv == 'via-setup' && 'dlio_benchmark' || 'python dlio_benchmark/main.py' }}
      GOTCHA_DEBUG: 3
      OMPI_ALLOW_RUN_AS_ROOT: 1
      OMPI_ALLOW_RUN_AS_ROOT_CONFIRM: 1
      PYTHON_VER: ${{ matrix.python }}
      RDMAV_FORK_SAFE: "1"
      VENV_PATH: "/home/runner/work/.venv/${{ matrix.venv }}"
    steps:
      - name: Clear disc
        run: |
          sudo rm -rf /usr/share/dotnet
          sudo rm -rf /opt/ghc
          sudo rm -rf "/usr/local/share/boost"
          sudo rm -rf "$AGENT_TOOLSDIRECTORY"
      - name: Push checkout
        if: github.event_name == 'push'
        uses: actions/checkout@v3
      - name: PR checkout
        if: github.event_name == 'pull_request'
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Set up Python ${{ matrix.python }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python }}
      - name: Add current directory to PYTHONPATH
        if: matrix.venv == 'via-reqs'
        run: echo "PYTHONPATH=$(pwd):$PYTHONPATH" >> $GITHUB_ENV
      - name: Cache install modules
        id: cache-modules
        uses: actions/cache@v3
        with:
          path: ${{ env.VENV_PATH }}
          key: ${{ matrix.venv }}-gcc${{ matrix.gcc }}-python${{ matrix.python }}-${{ hashFiles('requirements.txt', 'setup.py') }}
      - name: Install system dependencies
        run: |
          sudo apt update
          sudo apt-get install -y $CC $CXX libc6 git
          sudo apt-get install -y openmpi-bin openmpi-common libopenmpi-dev python3-dev
      - name: Install DLIO via setup.py
        if: matrix.venv == 'via-setup' && steps.cache-modules.outputs.cache-hit != 'true'
        run: |
          echo "venv: ${VENV_PATH} - gcc: $CC"
          python -m venv ${VENV_PATH}
          source ${VENV_PATH}/bin/activate
          pip install --upgrade pip
          pip install .[test]
      - name: Install DLIO via requirements.txt
        if: matrix.venv == 'via-reqs' && steps.cache-modules.outputs.cache-hit != 'true'
        run: |
          echo "venv: ${VENV_PATH} - gcc: $CC"
          python -m venv ${VENV_PATH}
          source ${VENV_PATH}/bin/activate
          pip install --upgrade pip
          pip install -r requirements.txt
      - name: test_gen_data
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_gen_data[png-tensorflow] -v
          mpirun -np 2 pytest -k test_gen_data[npz-tensorflow] -v
          mpirun -np 2 pytest -k test_gen_data[jpeg-tensorflow] -v
          mpirun -np 2 pytest -k test_gen_data[tfrecord-tensorflow] -v
          mpirun -np 2 pytest -k test_gen_data[hdf5-tensorflow] -v
          mpirun -np 2 pytest -k test_gen_data[indexed_binary-tensorflow] -v
          mpirun -np 2 pytest -k test_gen_data[mmap_indexed_binary-tensorflow] -v
          rm -rf data
      - name: test_custom_storage_root_gen_data
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_storage_root_gen_data[png-tensorflow] -v
          mpirun -np 2 pytest -k test_storage_root_gen_data[npz-tensorflow] -v
          mpirun -np 2 pytest -k test_storage_root_gen_data[jpeg-tensorflow] -v
          mpirun -np 2 pytest -k test_storage_root_gen_data[tfrecord-tensorflow] -v
          mpirun -np 2 pytest -k test_storage_root_gen_data[hdf5-tensorflow] -v
          mpirun -np 2 pytest -k test_storage_root_gen_data[indexed_binary-tensorflow] -v
          mpirun -np 2 pytest -k test_storage_root_gen_data[mmap_indexed_binary-tensorflow] -v
          rm -rf data
      - name: test_train
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_train[png-tensorflow-tensorflow] -v
          mpirun -np 2 pytest -k test_train[npz-tensorflow-tensorflow] -v
          mpirun -np 2 pytest -k test_train[jpeg-tensorflow-tensorflow] -v
          mpirun -np 2 pytest -k test_train[tfrecord-tensorflow-tensorflow] -v
          mpirun -np 2 pytest -k test_train[hdf5-tensorflow-tensorflow] -v
          mpirun -np 2 pytest -k test_train[csv-tensorflow-tensorflow] -v
          mpirun -np 2 pytest -k test_train[png-pytorch-pytorch] -v
          mpirun -np 2 pytest -k test_train[npz-pytorch-pytorch] -v
          mpirun -np 2 pytest -k test_train[jpeg-pytorch-pytorch] -v
          mpirun -np 2 pytest -k test_train[hdf5-pytorch-pytorch] -v
          mpirun -np 2 pytest -k test_train[csv-pytorch-pytorch] -v
          mpirun -np 2 pytest -k test_train[png-tensorflow-dali] -v
          mpirun -np 2 pytest -k test_train[npz-tensorflow-dali] -v
          mpirun -np 2 pytest -k test_train[jpeg-tensorflow-dali] -v
          mpirun -np 2 pytest -k test_train[hdf5-tensorflow-dali] -v
          mpirun -np 2 pytest -k test_train[csv-tensorflow-dali] -v
          mpirun -np 2 pytest -k test_train[png-pytorch-dali] -v
          mpirun -np 2 pytest -k test_train[npz-pytorch-dali] -v
          mpirun -np 2 pytest -k test_train[jpeg-pytorch-dali] -v
          mpirun -np 2 pytest -k test_train[hdf5-pytorch-dali] -v
          mpirun -np 2 pytest -k test_train[csv-pytorch-dali] -v
          mpirun -np 2 pytest -k test_train[indexed_binary-tensorflow-tensorflow] -v
          mpirun -np 2 pytest -k test_train[indexed_binary-pytorch-pytorch] -v
          mpirun -np 2 pytest -k test_train[indexed_binary-tensorflow-dali] -v
          mpirun -np 2 pytest -k test_train[indexed_binary-pytorch-dali] -v
          mpirun -np 2 pytest -k test_train[mmap_indexed_binary-tensorflow-tensorflow] -v
          mpirun -np 2 pytest -k test_train[mmap_indexed_binary-pytorch-pytorch] -v
          mpirun -np 2 pytest -k test_train[mmap_indexed_binary-tensorflow-dali] -v
          mpirun -np 2 pytest -k test_train[mmap_indexed_binary-pytorch-dali] -v
          rm -rf data
      - name: test_custom_storage_root_train
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_custom_storage_root_train[png-tensorflow] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[npz-tensorflow] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[jpeg-tensorflow] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[tfrecord-tensorflow] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[hdf5-tensorflow] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[csv-tensorflow] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[png-pytorch] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[npz-pytorch] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[jpeg-pytorch] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[hdf5-pytorch] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[csv-pytorch] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[indexed_binary-tensorflow] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[indexed_binary-pytorch] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[mmap_indexed_binary-tensorflow] -v
          mpirun -np 2 pytest -k test_custom_storage_root_train[mmap_indexed_binary-pytorch] -v
          rm -rf data
      - name: test_checkpoint_epoch
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_checkpoint_epoch[tensorflow-1024-optimizers0-2-layer_params0-all_ranks] -v
          mpirun -np 2 pytest -k test_checkpoint_epoch[pytorch-1024-optimizers1-2-layer_params1-all_ranks] -v
          mpirun -np 2 pytest -k test_checkpoint_epoch[tensorflow-1024-optimizers2-2-layer_params2-rank_zero] -v
          mpirun -np 2 pytest -k test_checkpoint_epoch[pytorch-1024-optimizers3-2-layer_params3-rank_zero] -v
          mpirun -np 2 pytest -k test_checkpoint_epoch[tensorflow-1024-optimizers4-1-layer_params4-all_ranks] -v
          mpirun -np 2 pytest -k test_checkpoint_epoch[pytorch-1024-optimizers5-1-layer_params5-all_ranks] -v
          rm -rf data
      - name: test_checkpoint_step
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_checkpoint_step -v
      - name: test_eval
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_eval -v
      - name: test_multi_threads
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_multi_threads[tensorflow-0] -v
          mpirun -np 2 pytest -k test_multi_threads[tensorflow-1] -v
          mpirun -np 2 pytest -k test_multi_threads[tensorflow-2] -v
          mpirun -np 2 pytest -k test_multi_threads[pytorch-0] -v
          mpirun -np 2 pytest -k test_multi_threads[pytorch-1] -v
          mpirun -np 2 pytest -k test_multi_threads[pytorch-2] -v
          rm -rf data
      - name: test-pytorch-multiprocessing-context
        run: |
          source ${VENV_PATH}/bin/activate
          mpirun -np 2 pytest -k test_pytorch_multiprocessing_context[0-None] -v
          mpirun -np 2 pytest -k test_pytorch_multiprocessing_context[1-fork] -v
          mpirun -np 2 pytest -k test_pytorch_multiprocessing_context[2-forkserver] -v
          mpirun -np 2 pytest -k test_pytorch_multiprocessing_context[2-spawn] -v
          rm -rf data
      - name: test_subset
        run: |
          source ${VENV_PATH}/bin/activate
          rm -rf output data checkpoints
          mpirun -np 2 pytest -k test_subset -v
          rm -rf data
      - name: test-tf-loader-tfrecord
        run: |
          source ${VENV_PATH}/bin/activate
          rm -rf output data checkpoints
          mpirun -np 2 ${DLIO_EXEC} workload=resnet50_tf ++workload.dataset.num_files_train=64 ++workload.workflow.train=False ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=4 ++workload.dataset.num_samples_per_file=16
          mpirun -np 2 ${DLIO_EXEC} workload=resnet50_tf ++workload.dataset.num_files_train=64 ++workload.workflow.train=True ++workload.workflow.generate_data=False ++workload.dataset.num_files_train=4 ++workload.dataset.num_samples_per_file=16 ++workload.train.computation_time=0.01 ++workload.train.epochs=1
          rm -rf data
      - name: test-torch-loader-npz
        run: |
          source ${VENV_PATH}/bin/activate
          rm -rf output data checkpoints
          mpirun -np 2 ${DLIO_EXEC} workload=unet3d_a100 ++workload.train.computation_time=0.05 ++workload.evaluation.eval_time=0.01 ++workload.workflow.train=False ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=8 ++workload.dataset.num_files_eval=8 ++workload.reader.read_threads=2 ++workload.dataset.record_length=4096 ++workload.dataset.record_length_stdev=0
          mpirun -np 2 ${DLIO_EXEC} workload=unet3d_a100 ++workload.train.computation_time=0.05 ++workload.evaluation.eval_time=0.01 ++workload.train.epochs=1 ++workload.workflow.train=True ++workload.workflow.generate_data=False ++workload.dataset.num_files_train=8 ++workload.dataset.num_files_eval=8 ++workload.reader.read_threads=0 ++workload.dataset.record_length=4096 ++workload.dataset.record_length_stdev=0
          rm -rf data
      - name: test-tf-loader-npz
        run: |
          source ${VENV_PATH}/bin/activate
          rm -rf output data checkpoints
          mpirun -np 2 ${DLIO_EXEC} workload=unet3d_a100 ++workload.framework=tensorflow ++workload.data_reader.data_loader=tensorflow ++workload.train.computation_time=0.05 ++workload.evaluation.eval_time=0.01 ++workload.train.epochs=2 ++workload.workflow.train=False ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=16 ++workload.dataset.num_files_eval=16 ++workload.reader.read_threads=2 ++workload.dataset.record_length=4096 ++workload.dataset.record_length_stdev=0
          mpirun -np 2 ${DLIO_EXEC} workload=unet3d_a100 ++workload.framework=tensorflow ++workload.data_reader.data_loader=tensorflow ++workload.train.computation_time=0.05 ++workload.evaluation.eval_time=0.01 ++workload.train.epochs=2 ++workload.workflow.train=True ++workload.workflow.generate_data=False ++workload.dataset.num_files_train=16 ++workload.dataset.num_files_eval=16 ++workload.reader.read_threads=2 ++workload.dataset.record_length=4096 ++workload.dataset.record_length_stdev=0
          rm -rf data
      - name: test_unet3d
        run: |
          source ${VENV_PATH}/bin/activate
          rm -rf output data checkpoints
          mpirun -np 2 ${DLIO_EXEC} workload=unet3d_a100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=42
          mpirun -np 2 ${DLIO_EXEC} workload=unet3d_h100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=42
          mpirun -np 2 ${DLIO_EXEC} workload=unet3d_h100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=42 ++workload.dataset.format=synthetic
          rm -rf data
      - name: test_resnet50
        run: |
          source ${VENV_PATH}/bin/activate
          rm -rf output data checkpoints
          mpirun -np 2 ${DLIO_EXEC} workload=resnet50_a100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=4
          mpirun -np 2 ${DLIO_EXEC} workload=resnet50_h100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=4
          mpirun -np 2 ${DLIO_EXEC} workload=resnet50_h100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=4 ++workload.dataset.format=synthetic
          rm -rf data
      - name: test_cosmoflow
        run: |
          source ${VENV_PATH}/bin/activate
          rm -rf output data checkpoints
          mpirun -np 2 ${DLIO_EXEC} workload=cosmoflow_a100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=16
          mpirun -np 2 ${DLIO_EXEC} workload=cosmoflow_h100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=16
          mpirun -np 2 ${DLIO_EXEC} workload=cosmoflow_h100 ++workload.workflow.generate_data=True ++workload.dataset.num_files_train=16 ++workload.dataset.format=synthetic
          rm -rf data
4 changes: 2 additions & 2 deletions .github/workflows/jekyll-gh-pages.yml
@@ -1,5 +1,5 @@
 # Sample workflow for building and deploying a Jekyll site to GitHub Pages
-name: Deploy Jekyll with GitHub Pages dependencies preinstalled
+name: Deploy Documentation
 
 on:
   # Runs on pushes targeting the default branch
@@ -51,5 +51,5 @@ jobs:
       - name: Deploy to GitHub Pages
         id: deployment
         uses: actions/deploy-pages@v1
-      with:
+        with:
           folder: _build/html/