This repository has been archived by the owner on Jan 3, 2023. It is now read-only.
forked from tensorflow/tensorflow
WIP: TF threading #5
Open
rsdubtso wants to merge 10,000 commits into NervanaSystems:master from rsdubtso:master
Conversation
rsdubtso force-pushed the master branch 2 times, most recently from b9fd9e4 to 648cbf4 on January 28, 2019 at 19:59
rsdubtso force-pushed the master branch 2 times, most recently from 6338af3 to a9b0a6a on May 31, 2019 at 22:39
…cLayoutOptimizer. PiperOrigin-RevId: 258198838
Sum_{i=1,...,k} Precision@i(e) * I(i, e) / D(k, e)

where I(i, e) is 1 if the label of rank i for e is a positive label for e, and 0 otherwise, and the denominator D(k, e) is the maximum possible number of terms in the sum with a non-zero I(i, e): D(k, e) = min(k, number of positive labels for e).

While this formula is implemented correctly in `average_precision_at_k` when `labels` is a sparse tensor, it is incorrect when `labels` is a dense tensor with some values out of the range [0, num_classes), which is necessary for examples with different numbers of positive labels. In this case, the current implementation defines the denominator as D(k, e) = min(k, size of the last dimension of `labels`).

Consider two examples e_1 and e_2 with positive labels {0, 1} and {0}, and assume that the top-ranked label for e_2 is 0. AveragePrecision@2(e_2) = 1 and is computed correctly when the labels are given by a sparse tensor; but when the labels are given by a dense tensor, e.g. [[0, 1], [0, -1]], then AveragePrecision@2(e_2) = 0.5, as D(2, e_2) is taken to be min(2, 2). This CL corrects this error. PiperOrigin-RevId: 258200333
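The corrected denominator can be sketched in plain Python. This is a minimal, illustrative sketch of the metric described above, not the TensorFlow implementation; the function and parameter names are assumptions.

```python
def average_precision_at_k(ranked_labels, positives, k):
    """Average Precision@k for a single example.

    ranked_labels: class ids ordered by predicted score, best first.
    positives: set of positive class ids for this example.
    """
    hits = 0
    precision_sum = 0.0
    for i, label in enumerate(ranked_labels[:k], start=1):
        if label in positives:
            hits += 1
            precision_sum += hits / i  # Precision@i, counted only at hits
    # Corrected denominator D(k, e) = min(k, number of positive labels),
    # rather than min(k, size of the dense labels dimension).
    denominator = min(k, len(positives))
    return precision_sum / denominator if denominator else 0.0
```

With the example from the commit message, e_2 has positives {0} and top-ranked label 0, so the denominator is min(2, 1) = 1 and AveragePrecision@2(e_2) = 1, regardless of how the labels were padded.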
…rces. PiperOrigin-RevId: 258201156
…n data edge" for If/While nodes as well. PiperOrigin-RevId: 258205803
This change also adds a test that covers the error, i.e. attempting to pass in tensors with different shapes across runs to a particular instance of a collective gather op. PiperOrigin-RevId: 258207891
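The invariant the test above exercises can be sketched as a small shape-consistency check. This is a hypothetical helper in plain Python, not the TensorFlow collective implementation; the class and method names are assumptions.

```python
class CollectiveGatherState:
    """Illustrative check: one instance of a collective gather op must
    see the same tensor shape on every run."""

    def __init__(self):
        self.expected_shape = None

    def check(self, shape):
        shape = tuple(shape)
        if self.expected_shape is None:
            # First run fixes the expected shape.
            self.expected_shape = shape
        elif shape != self.expected_shape:
            raise ValueError(
                f"shape {shape} does not match first-run shape "
                f"{self.expected_shape}")
```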
…h op for rebatching. PiperOrigin-RevId: 258211187
…en loading, and vice versa. Support for this was (perhaps unintentionally) added in cl/242345741. PiperOrigin-RevId: 258218701
This will make the code correctly fall back to using the existing train_array or train_generator. PiperOrigin-RevId: 258227722
…ef to MLIR. Otherwise, it will fail to find the input node later in the InferMainFunctionType method. We should preserve input nodes even if unused so that the user can feed inputs as specified. PiperOrigin-RevId: 258230459
This update fixes a few TensorChipping and TensorSlicing regressions. PiperOrigin-RevId: 258238409
…tfrun_removal PiperOrigin-RevId: 258244067
…from bazelrc file. They are no longer needed and would prevent a future incompatible migration for the RBE build. Related: bazelbuild/bazel#7480. PiperOrigin-RevId: 258244467
1. Format the debug string to have a newline before printing out the proto debug string.
2. Change the node generation ordering for the output identity op; it is now in increasing order.
3. Fix the typo in the test.
PiperOrigin-RevId: 258787166
…requested dtype. Prior to this change, the conversion logic was duplicated between EagerTensor_init and ConvertToTensor in pywrap_tfe_src.cc. PiperOrigin-RevId: 258787751
…flow cases. Earlier we tried to remove it in all cases but that doesn't work because if we create a table in a control flow setting then we can't run the initializer outside anymore. Fixes tensorflow#29872 and tensorflow#27086. PiperOrigin-RevId: 258792706
…re headers require it. PiperOrigin-RevId: 258793968
…upstream_rocm_platform_fix_190717 PiperOrigin-RevId: 258800263
…ution PiperOrigin-RevId: 258802489
They are statistical tests for very low probability regions, which is not very useful in practice. PiperOrigin-RevId: 258804299
…n printing node names (object identity was left over from when regular objects were keys). Object-identity dictionaries and Python integers don't mix, except for the few integers that get interned. PiperOrigin-RevId: 258805861
…upstream_rocm_update_190711 PiperOrigin-RevId: 258814126
…upstream_skip_double_dtyp_subtests PiperOrigin-RevId: 258814196
Previously, mismatches in opcode or number of operands weren't very informative because the error message didn't print out the HloInstruction string. For example, an opcode mismatch previously looked like:

Value of: body_data_add
Expected: subtract
Actual: 0x7f58fe3e8c00 (of type xla::HloInstruction*)

With this CL, it now looks like:

Value of: body_data_add
Expected: subtract
Actual: 0x7efefd68ec00 (of type xla::HloInstruction*), (%add.1 = f32[2,3]{1,0:S(1)} add(f32[2,3]{1,0} %get-tuple-element.2, f32[2,3]{1,0} %constant.2))

PiperOrigin-RevId: 258814523
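The improvement above can be sketched as a matcher that includes a human-readable instruction string in its failure message instead of only the pointer. This is a hypothetical plain-Python sketch; the dict keys and function name are assumptions, not the XLA matcher API.

```python
def match_opcode(instruction, expected_opcode):
    """Return (matched, message). On mismatch, the message includes the
    instruction's string form, not just its address."""
    if instruction["opcode"] != expected_opcode:
        return (
            False,
            f"Expected: {expected_opcode}\n"
            f"Actual: {hex(id(instruction))} ({instruction['text']})",
        )
    return (True, "")
```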
We expect a merge to receive a single backedge (multiple NextIteration nodes feeding into the same merge is unexpected here). PiperOrigin-RevId: 258818847
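The expectation stated above can be sketched as a small validation over a Merge node's input names. This is an illustrative plain-Python check, not the TensorFlow graph code; the naming convention (inputs prefixed "NextIteration") is an assumption.

```python
def check_single_backedge(merge_inputs):
    """Verify a Merge node has at most one NextIteration backedge.

    merge_inputs: list of input node names feeding the Merge.
    Returns the backedge name, or None if there is no backedge.
    """
    backedges = [src for src in merge_inputs
                 if src.startswith("NextIteration")]
    if len(backedges) > 1:
        # Multiple NextIteration nodes feeding one Merge is unexpected.
        raise ValueError(f"Merge has {len(backedges)} backedges: {backedges}")
    return backedges[0] if backedges else None
```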
The .bazelrc updated:
- TF threading is now default for MKL-DNN
- linking with binary MKL is now disabled by default

Limitations:
- only a single session without inter-op parallelism is supported
- XLA not covered