Commit
Correct batch processing section documentation and native multiprocessing.
saurabheights committed Sep 20, 2023
1 parent 3a0a6e9 commit a5b5aff
Showing 2 changed files with 17 additions and 10 deletions.
11 changes: 8 additions & 3 deletions docs/tutorial/reconstruction_system/make_fragments.rst

@@ -86,11 +86,16 @@ Batch processing
 
 .. literalinclude:: ../../../examples/python/reconstruction_system/make_fragments.py
    :language: python
-   :lineno-start: 181
-   :lines: 27,182-205
+   :start-at: def process_single_fragment(fragment_id, color_files, depth_files, n_files,
+   :linenos:
+   :lineno-match:
 
-The main function calls each individual function explained above.
+The ``process_single_fragment`` function calls each individual function explained above.
+The ``run`` function determines the number of fragments to generate based on the number
+of images in the dataset and the configuration value ``n_frames_per_fragment``.
+Subsequently, it invokes ``process_single_fragment`` for each of these fragments.
+Furthermore, it leverages multiprocessing to speed up computation of all fragments.
 
 .. _reconstruction_system_make_fragments_results:
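The fragment partitioning that the revised paragraph describes can be sketched in a few lines. This is an illustrative helper, not code from the repository; the values 250 and 100 are made-up counts for a dataset and for ``n_frames_per_fragment``:

```python
import math


def fragment_ranges(n_files, n_frames_per_fragment):
    """Split n_files frames into consecutive fragments of at most
    n_frames_per_fragment frames each, as the docs describe."""
    n_fragments = int(math.ceil(float(n_files) / n_frames_per_fragment))
    return [(fid * n_frames_per_fragment,
             min((fid + 1) * n_frames_per_fragment, n_files))
            for fid in range(n_fragments)]


# 250 RGB-D frames at 100 frames per fragment yield 3 fragments,
# the last one shorter than the rest.
print(fragment_ranges(250, 100))  # [(0, 100), (100, 200), (200, 250)]
```

Each ``(start, end)`` pair would then be handed to one ``process_single_fragment`` call, which is what makes the per-fragment work independent and parallelizable.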
16 changes: 9 additions & 7 deletions examples/python/reconstruction_system/make_fragments.py

@@ -8,6 +8,7 @@
 # examples/python/reconstruction_system/make_fragments.py
 
 import math
+import multiprocessing
 import os, sys
 import numpy as np
 import open3d as o3d
@@ -172,13 +173,14 @@ def run(config):
         math.ceil(float(n_files) / config['n_frames_per_fragment']))
 
     if config["python_multi_threading"] is True:
-        from joblib import Parallel, delayed
-        import multiprocessing
-        import subprocess
-        MAX_THREAD = min(multiprocessing.cpu_count(), n_fragments)
-        Parallel(n_jobs=MAX_THREAD)(delayed(process_single_fragment)(
-            fragment_id, color_files, depth_files, n_files, n_fragments, config)
-                                    for fragment_id in range(n_fragments))
+        max_workers = min(max(1, multiprocessing.cpu_count() - 1), n_fragments)
+        # Prevent over-allocation of OpenMP threads in child processes
+        os.environ['OMP_NUM_THREADS'] = '1'
+        mp_context = multiprocessing.get_context('spawn')
+        with mp_context.Pool(processes=max_workers) as pool:
+            args = [(fragment_id, color_files, depth_files, n_files,
+                     n_fragments, config) for fragment_id in range(n_fragments)]
+            pool.starmap(process_single_fragment, args)
     else:
        for fragment_id in range(n_fragments):
            process_single_fragment(fragment_id, color_files, depth_files,
