docs: Fix a few typos (pytorch#81435)
There are small typos in:
- caffe2/python/recurrent.py
- test/distributed/test_c10d_nccl.py
- test/test_fx.py
- torch/csrc/jit/runtime/autodiff.cpp
- torchgen/gen.py

Fixes:
- Should read `propagation` rather than `propogation`.
- Should read `multiplied` rather than `multuplied`.
- Should read `eliminate` rather than `elminate`.
- Should read `dispatcher` rather than `disaptcher`.

Semi-automated pull request generated by
https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md
Pull Request resolved: pytorch#81435
Approved by: https://github.com/ngimel
timgates42 authored and pytorchmergebot committed Jul 14, 2022
1 parent da247ea commit 3a87b47
Showing 5 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion caffe2/python/recurrent.py
@@ -282,7 +282,7 @@ def map_to_dual_list(m):
 cell_net.Proto().type = 'simple'

 # The last output is a list of step workspaces,
-# which is only needed internally for gradient propogation
+# which is only needed internally for gradient propagation
 return results[:-1]


2 changes: 1 addition & 1 deletion test/distributed/test_c10d_nccl.py
@@ -2183,7 +2183,7 @@ def div(fut):
 process_group, allreduce_with_then_hook
 )

-# check whether the grads are equal to what allreduce returns multuplied by 5.
+# check whether the grads are equal to what allreduce returns multiplied by 5.
 # without the comm_hook, result would be still 0.25 * torch.ones(2, 2).
 self._run_and_verify_hook(gpu_model, 8, 1.25 * torch.ones(2, 2))

2 changes: 1 addition & 1 deletion test/test_fx.py
@@ -1550,7 +1550,7 @@ def forward(self, x):
 self.assertEqual(opcodes, set(['placeholder', 'get_attr', 'call_function', 'call_method',
                                'call_module', 'output']))

-# Test shape propogation and make sure results match actual
+# Test shape propagation and make sure results match actual
 self.assertEqual(output_shape, ref_out.shape)
 self.assertEqual(output_stride, ref_out.stride())

2 changes: 1 addition & 1 deletion torch/csrc/jit/runtime/autodiff.cpp
@@ -389,7 +389,7 @@ bool outputRequiresGrad(Value* output) {
 static ReverseDetails addReverseInline(Gradient& grad_desc) {
   auto& graph = *grad_desc.f;
   // note: reverse_node is intentionally not inserted to avoid
-  // accidentally acting on it (e.g. in elminate dead code),
+  // accidentally acting on it (e.g. in eliminate dead code),
   // std::cout << *reverse_node << to view its state.
   auto reverse_node = graph.create(prim::Reverse, 0);
   auto reverse_block = reverse_node->addBlock();
2 changes: 1 addition & 1 deletion torchgen/gen.py
@@ -103,7 +103,7 @@
 # - 'api' has conversions for how to translate JIT schema into
 #   the various C++ APIs that the codegen interacts with. There
 #   are in fact THREE different C++ APIs: the public C++ API,
-#   the dispatcher API, and the legacy disaptcher API. See each
+#   the dispatcher API, and the legacy dispatcher API. See each
 #   of these respective files for more information

 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #
