Releases · yahoo/TensorFlowOnSpark
v2.2.5
v2.2.4
- Added an option to defer releasing the temporary socket/port to the user map_function, for cases where user code does not bind to the assigned port soon enough (e.g. extensive pre-processing before invoking TF APIs) and another process could bind to the same port in the meantime.
- Updated screwdriver.cd build template.
- Trigger documentation publish after PyPI push.
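The deferred-port option in v2.2.4 guards against another process grabbing a reserved port before user code binds to it. A minimal stdlib sketch of the reserve-then-release pattern (the function names here are illustrative, not the TensorFlowOnSpark API):

```python
import socket

def reserve_port():
    """Bind an ephemeral port and keep the socket open to hold the reservation."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", 0))   # OS assigns a free port
    return sock, sock.getsockname()[1]

def map_function(sock, port):
    """User map_function: do slow pre-processing while the socket still holds
    the port, then release it just before binding a TF server to that port."""
    # ... extensive pre-processing would happen here ...
    sock.close()         # release immediately before TF binds `port`
    return port

sock, port = reserve_port()
result = map_function(sock, port)
```

Releasing the socket as late as possible shrinks the window in which another process can steal the port.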
v2.2.3
v2.2.2
v2.2.1
- Added support for port ranges in the `TFOS_SERVER_PORT` environment variable.
- Updated `mnist/keras/mnist_tf.py` example with a workaround for a tensorflow datasets issue.
- Added a more detailed error message for missing `executor_id`.
- Added unit tests for GPU allocation variants.
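A hedged sketch of how a range-valued port setting like `TFOS_SERVER_PORT` might be consumed, assuming a `min-max` format (the format the library actually accepts may differ):

```python
import os
import socket

def parse_port_spec(spec):
    """Parse 'N' as a single port, or 'N-M' as an inclusive port range."""
    if "-" in spec:
        lo, hi = (int(p) for p in spec.split("-", 1))
        return range(lo, hi + 1)
    return [int(spec)]

def bind_first_free(ports):
    """Try each candidate port in order; return a bound socket and its port."""
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("", port))
            return sock, port
        except OSError:
            sock.close()
    raise RuntimeError("no free port in the configured range")

# e.g. set spark.executorEnv.TFOS_SERVER_PORT=2222-2230 for the executors
spec = os.environ.get("TFOS_SERVER_PORT", "2222-2230")
sock, port = bind_first_free(parse_port_spec(spec))
```

A range lets multiple executors on one host avoid colliding on a single fixed port.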
v2.2.0
- Added support for Spark 3.0 GPU resources.
- Updated to support Spark 2.4.5.
- Fixed dataset ordering in `mnist_inference.py` examples (thanks to @qsbao).
- Added optional environment variables to configure TF server/gRPC ports and TensorBoard ports on executors.
- Fixed bug with `TFNode.start_cluster_server` in backwards-compatibility code for TF1.x.
- Fixed file conflict issue with `compat.export_saved_model` in TF2.1.
- Removed support for Python 2.x.
v2.1.3
v2.1.2
v2.1.1
- Added `compat.is_gpu_available()` method, which uses `tf.config.list_logical_devices('GPU')` for TF2.1 and `tf.test.is_gpu_available()` for earlier versions of TF.
- Added ability to launch TensorBoard on `chief:0` or `master:0` nodes (for small clusters without `worker` nodes).
v2.1.0
- Added `compat` module to manage minor API changes in TensorFlow.
- Added compatibility for TF2.1.0rc0 (exporting saved_models and configuring auto-shard policy).
- Re-introduced compatibility for TF1.x (except support for InputMode.TENSORFLOW in the ML Pipeline API).
- Added TFParallel class for parallelized single-node inferencing via Spark executors.
- Updated examples for TF API changes.
- Updated to use module-level loggers.