
Conversation

justinchuby (Collaborator) commented Sep 3, 2024

codecov bot commented Sep 3, 2024

Codecov Report

Attention: Patch coverage is 10.81081% with 33 lines in your changes missing coverage. Please review.

Project coverage is 73.63%. Comparing base (84dfcad) to head (26779c9).
Report is 95 commits behind head on main.

Files with missing lines                           Patch %   Lines
onnxscript/function_libs/torch_lib/ops/fft.py      10.81%    33 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1844      +/-   ##
==========================================
- Coverage   73.77%   73.63%   -0.14%     
==========================================
  Files         225      225              
  Lines       29333    29343      +10     
  Branches     3467     3470       +3     
==========================================
- Hits        21639    21606      -33     
- Misses       6560     6601      +41     
- Partials     1134     1136       +2     

☔ View full report in Codecov by Sentry.

justinchuby (Collaborator, Author) commented:
Just specify dft_length
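
As a rough illustration of what "just specify dft_length" buys: ONNX DFT's optional dft_length input zero-pads or truncates the transformed axis to the requested length, so no manual padding is needed. numpy's analogous n parameter behaves the same way; the snippet below is only a sketch of the idea, not the PR's code.

import numpy as np

x = np.arange(6, dtype=np.float64)

# numpy's `n` is the analogue of ONNX DFT's optional dft_length input:
# the transformed axis is zero-padded (or truncated) to length n first.
padded = np.fft.fft(x, n=8)      # 6 samples zero-padded to 8
truncated = np.fft.fft(x, n=4)   # only the first 4 samples are used

assert np.allclose(padded, np.fft.fft(np.pad(x, (0, 2))))
assert np.allclose(truncated, np.fft.fft(x[:4]))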

@@ -1,7 +1,5 @@
-# --------------------------------------------------------------------------
-# Copyright (c) Microsoft Corporation. All rights reserved.
+# Copyright (c) Microsoft Corporation.

Check warning (Code scanning / lintrunner): RUFF-FORMAT/format
Run lintrunner -a to apply this patch.

Check warning (Code scanning / lintrunner): RUFF/format
Run lintrunner -a to apply this patch.
from __future__ import annotations

-from typing import Optional, Sequence
+from typing import Literal, Optional, Sequence

Check warning (Code scanning / lintrunner): PYLINT/W0611
Unused Literal imported from typing (unused-import)
See unused-import. To disable, use # pylint: disable=unused-import

Check warning (Code scanning / lintrunner): RUFF/F401
typing.Literal imported but unused.
See https://docs.astral.sh/ruff/rules/unused-import
def _fftn_ortho_normalization(
    self: TFloat,
    dims: Sequence[int],
    forward: bool,

Check warning (Code scanning / lintrunner): PYLINT/W0613
Unused argument 'forward' (unused-argument)
See unused-argument. To disable, use # pylint: disable=unused-argument
result = transformed
transformed = self

signal_size = _compute_signal_size(self, dims, last_dim_size)

Check warning (Code scanning / lintrunner): PYLINT/W0612
Unused variable 'signal_size' (unused-variable)
See unused-variable. To disable, use # pylint: disable=unused-variable
# If normalization is 1/n and we are in backward mode, we use the inverse
# mode in ONNX to get the 1/n normalization.
inverse = normalization == 2 and not forward
ortho = normalization == 1

Check warning (Code scanning / lintrunner): RUFF/F841
Local variable ortho is assigned to but never used.
See https://docs.astral.sh/ruff/rules/unused-variable
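
For reference, the normalization codes used in the snippet above (1 = ortho, 2 = scale by 1/n, 0 = none) translate into a plain scale factor on the DFT output. Below is a minimal numpy check of that mapping; the scale() helper is illustrative only and not part of fft.py.

import numpy as np

n = 8
x = np.random.default_rng(0).standard_normal(n)

def scale(normalization: int, n: int) -> float:
    # 0: no scaling, 1: ortho (1/sqrt(n)), 2: scale by 1/n
    return {0: 1.0, 1: 1.0 / np.sqrt(n), 2: 1.0 / n}[normalization]

# norm="ortho" is exactly the 1/sqrt(n) case (normalization == 1) ...
assert np.allclose(np.fft.fft(x, norm="ortho"), np.fft.fft(x) * scale(1, n))
# ... and the plain inverse FFT already carries the 1/n factor, which is why
# the backward 1/n case can piggyback on ONNX DFT's inverse mode.
assert np.allclose(np.fft.ifft(x), np.conj(np.fft.fft(np.conj(x))) * scale(2, n))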

# Remove the batch dimension
transformed = op.Squeeze(transformed, axes=[0])
normalized = _fftn_onnx_normalization(

Check failure (Code scanning / lintrunner): PYLINT/E0602
Undefined variable '_fftn_onnx_normalization' (undefined-variable)
See undefined-variable. To disable, use # pylint: disable=undefined-variable

Check failure (Code scanning / lintrunner): RUFF/F821
Undefined name _fftn_onnx_normalization.
See https://docs.astral.sh/ruff/rules/undefined-name
# Thus dim=-1 in PyTorch is dim=-2 in ONNX.
dim = [(d - 1) + self_rank if d < 0 else d for d in dim]
-transformed = _fftn_onnx(self, dim, normalization, inverse=True, onesided=False)
+transformed = _fftn_onnx(

Check failure (Code scanning / lintrunner): PYLINT/E1123
Unexpected keyword argument 'inverse' in function call (unexpected-keyword-arg)
See unexpected-keyword-arg. To disable, use # pylint: disable=unexpected-keyword-arg

Check failure (Code scanning / lintrunner): PYLINT/E1120
No value for argument 'forward' in function call (no-value-for-parameter)
See no-value-for-parameter. To disable, use # pylint: disable=no-value-for-parameter
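
A worked instance of the remapping in the list comprehension above, with illustrative numbers: self_rank is the rank of the real-valued representation, whose trailing size-2 axis holds the real and imaginary parts, which is why PyTorch's dim=-1 lands one axis to the left in ONNX.

# A complex tensor of shape (B, H, W) is carried as a real tensor of shape
# (B, H, W, 2), so self_rank is 4 in this example.
self_rank = 4
dim = [-1, 0]
dim = [(d - 1) + self_rank if d < 0 else d for d in dim]
assert dim == [2, 0]  # PyTorch dim=-1 (the W axis) maps to ONNX axis 2, not 3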
justinchuby marked this pull request as ready for review on January 27, 2025 at 20:20.
justinchuby added the "module: torchlib" label (Related to the torch/aten function lib in development) on January 27, 2025.
from __future__ import annotations

-from typing import Optional, Sequence
+from typing import Literal, Optional, Sequence

Check notice (Code scanning / CodeQL): Unused import
Import of 'Literal' is not used.
result = transformed
transformed = self

signal_size = _compute_signal_size(self, dims, last_dim_size)

Check notice (Code scanning / CodeQL): Unused local variable
Variable signal_size is not used.

# Torch computes one-sided FFT on the last dimension only.
if onesided:
    transformed = op.DFT(transformed, axis=dims[-1], onesided=True)

Check notice (Code scanning / CodeQL): Unused local variable
Variable transformed is not used.
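
A quick numpy illustration of the one-sided convention mentioned in the comment above: only the last transformed axis is halved, to n // 2 + 1 bins, and numpy's rfftn follows the same rule torch does.

import numpy as np

x = np.random.default_rng(0).standard_normal((4, 6))

full = np.fft.fftn(x)        # two-sided spectrum, shape (4, 6)
onesided = np.fft.rfftn(x)   # one-sided on the last axis only

assert onesided.shape == (4, 6 // 2 + 1)
assert np.allclose(onesided, full[:, : 6 // 2 + 1])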
    transformed = op.DFT(transformed, axis=dims[-1], onesided=True)
    # TODO: Update signal_size for one-sided FFT
elif last_dim_size is not None:
    transformed = op.DFT(

Check notice (Code scanning / CodeQL): Unused local variable
Variable transformed is not used.
result = transformed
else:
    result = op.Mul(transformed, total_sample_count)
transformed = op.DFT(transformed, axis=dims[-1], onesided=False)

Check notice (Code scanning / CodeQL): Unused local variable
Variable transformed is not used.
# If normalization is 1/n and we are in backward mode, we use the inverse
# mode in ONNX to get the 1/n normalization.
inverse = normalization == 2 and not forward
ortho = normalization == 1

Check notice (Code scanning / CodeQL): Unused local variable
Variable ortho is not used.
Comment on lines +172 to +174:

transformed = _fftn_onnx(
    self, dim, normalization, inverse=True, onesided=False, last_dim_size=last_dim_size
)

Check failure (Code scanning / CodeQL): Wrong name for an argument in a call
Keyword argument 'inverse' is not a supported parameter name of function _fftn_onnx.