This repository has been archived by the owner on Jan 3, 2023. It is now read-only.

WIP: TF threading #5

Open
wants to merge 10,000 commits into master
Conversation

rsdubtso

No description provided.

andyly and others added 24 commits July 15, 2019 12:10
…cLayoutOptimizer.

PiperOrigin-RevId: 258198838
  AveragePrecision@k(e) = Sum_{i=1,...,k} Precision@i(e) * I(i, e) / D(k, e)
where I(i, e) is 1 if the label of rank i for e is a positive label for e, and 0 otherwise,
and the denominator D(k, e) is the maximum possible number of terms in the sum with a nonzero I(i, e): D(k, e) = min(k, number of positive labels for e).

While this formula is implemented correctly in `average_precision_at_k` when `labels` is a sparse tensor, it is incorrect when `labels` is a dense tensor with some values outside the range [0, num_classes), which is necessary when examples have different numbers of positive labels. In that case, the current implementation defines the denominator as
D(k, e) = min(k, size of the last dimension of `labels`).

Consider two examples e_1 and e_2 with positive labels {0, 1} and {0}, and assume that the top-ranked label for e_2 is 0. AveragePrecision@2(e_2) = 1 and is computed correctly when the labels are given by a sparse tensor. But when the labels are given by a dense tensor, e.g. [[0, 1], [0, -1]], then AveragePrecision@2(e_2) = 0.5, because D(2, e_2) is taken to be min(2, 2) = 2.

This CL corrects this error.

PiperOrigin-RevId: 258200333
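The effect of the denominator choice can be reproduced with a small, framework-free sketch (plain Python, not the actual TensorFlow implementation; the function name mirrors the metric for illustration only):

```python
def average_precision_at_k(ranked, positives, k, denom):
    """Compute AveragePrecision@k for one example.

    ranked:    class ids sorted by predicted score, best first
    positives: set of true positive class ids
    denom:     the value used as D(k, e)
    """
    hits, ap_sum = 0, 0.0
    for i, label in enumerate(ranked[:k], start=1):
        if label in positives:
            hits += 1
            ap_sum += hits / i  # Precision@i, counted only at hit positions
    return ap_sum / denom

# Example e_2 from above: one positive label {0}, top-ranked prediction is 0.
ranked, positives, k = [0, 1], {0}, 2

# Correct denominator (sparse-labels path): min(k, #positives) = 1
correct = average_precision_at_k(ranked, positives, k, min(k, len(positives)))

# Buggy denominator (dense labels padded with -1): min(k, row width) = 2
buggy = average_precision_at_k(ranked, positives, k, min(k, 2))

print(correct)  # 1.0
print(buggy)    # 0.5
```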
…n data edge" for If/While nodes as well.

PiperOrigin-RevId: 258205803
This change also adds a test that covers the error, i.e. attempting to pass in
tensors with different shapes across runs to a particular instance of a
collective gather op.

PiperOrigin-RevId: 258207891
PiperOrigin-RevId: 258208153
…h op for rebatching.

PiperOrigin-RevId: 258211187
…en loading, and vice versa.

Support for this was (perhaps unintentionally) added in cl/242345741.

PiperOrigin-RevId: 258218701
This will make the code correctly fall back to using the existing train_array
or train_generator.

PiperOrigin-RevId: 258227722
…ef to MLIR

Otherwise, it will fail to find the input node later in the InferMainFunctionType method. We should preserve input nodes even if they are unused, so that users can feed inputs as specified.

PiperOrigin-RevId: 258230459
This update fixes a few TensorChipping and TensorSlicing regressions.

PiperOrigin-RevId: 258238409
PiperOrigin-RevId: 258239245
…tfrun_removal

PiperOrigin-RevId: 258244067
…from bazelrc file

They are no longer needed, and keeping them would prevent a future incompatible migration for the RBE build.

Related bazelbuild/bazel#7480

PiperOrigin-RevId: 258244467
qlzh727 and others added 25 commits July 18, 2019 09:54
1. Format the debug string to include a newline before printing out the proto
debug string.
2. Change the node generation ordering for output identity ops; they are now
generated in increasing order.
3. Fix a typo in the test.

PiperOrigin-RevId: 258787166
…requested dtype

Prior to this change, the conversion logic was duplicated between
EagerTensor_init and ConvertToTensor in pywrap_tfe_src.cc.

PiperOrigin-RevId: 258787751
PiperOrigin-RevId: 258790736
…flow

cases. Earlier we tried to remove it in all cases, but that doesn't work:
if we create a table in a control-flow setting, then we can no longer run the
initializer outside.

Fixes tensorflow#29872 and
tensorflow#27086.

PiperOrigin-RevId: 258792706
…re headers require it.

PiperOrigin-RevId: 258793968
…upstream_rocm_platform_fix_190717

PiperOrigin-RevId: 258800263
PiperOrigin-RevId: 258803932
They are statistical tests for very-low-probability regions, which are not very
useful in practice.

PiperOrigin-RevId: 258804299
…n printing node names (object-identity was left over from when regular objects were keys)

Object-identity dictionaries and Python integers don't mix, except for the few integers that get interned.
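The interning caveat can be seen directly in CPython, which caches the small integers -5..256 as singletons (an implementation detail of CPython, not a language guarantee):

```python
# Parsing "256" yields the cached small-int object, so both names point to
# the same object; 257 falls outside the cache and gets a fresh allocation.
a, b = int("256"), int("256")
c, d = int("257"), int("257")
print(a is b)  # True  (interned small int: identity works)
print(c is d)  # False (equal values, distinct objects: identity keys break)
```

This is why keying a dictionary on object identity is unreliable once the keys become plain integers.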

PiperOrigin-RevId: 258805861
PiperOrigin-RevId: 258805890
PiperOrigin-RevId: 258811490
…upstream_rocm_update_190711

PiperOrigin-RevId: 258814126
…upstream_skip_double_dtyp_subtests

PiperOrigin-RevId: 258814196
Previously, mismatches in the opcode or the number of operands weren't very
informative because the error message didn't print out the HloInstruction
string. For example, an opcode mismatch previously looked like:

Value of: body_data_add
Expected: subtract
  Actual: 0x7f58fe3e8c00 (of type xla::HloInstruction*)

With this CL, it now looks like:

Value of: body_data_add
Expected: subtract
  Actual: 0x7efefd68ec00 (of type xla::HloInstruction*), (%add.1 = f32[2,3]{1,0:S(1)} add(f32[2,3]{1,0} %get-tuple-element.2, f32[2,3]{1,0} %constant.2))

PiperOrigin-RevId: 258814523
We expect a merge to receive a single backedge (multiple NextIteration
nodes feeding into the same merge is unexpected here).

PiperOrigin-RevId: 258818847
Roman Dubtsov added 3 commits August 15, 2019 10:13
The .bazelrc was updated:
- TF threading is now default for MKL-DNN
- linking with binary MKL is now disabled by default

Limitations:
- only a single session without inter-op parallelism is supported
- XLA is not covered