
Encounter errors when training on Mac #1237

Open · carlos-spa opened this issue Oct 24, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@carlos-spa

Describe the bug

An error is thrown after running svc train:

RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
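
For context, this class of error is easy to reproduce in plain PyTorch, independent of so-vits-svc-fork: `.view()` requires the tensor's memory layout to be compatible with the requested shape, while `.reshape()` falls back to a copy when it is not. A minimal sketch (not the project's actual code):

```python
import torch

# A transpose produces a non-contiguous tensor: its strides no longer
# match a row-major layout, so .view() cannot reinterpret it in place.
t = torch.arange(6).reshape(2, 3).transpose(0, 1)

try:
    t.view(-1)  # raises the same RuntimeError seen in the log below
except RuntimeError as e:
    print(e)

# Either of these works: .reshape() copies when necessary, and
# .contiguous() materializes a contiguous copy first.
flat_a = t.reshape(-1)
flat_b = t.contiguous().view(-1)
print(flat_a.tolist())  # [0, 3, 1, 4, 2, 5]
```

In this issue the failing `.view()` call happens inside the backward pass on MPS, so the fix would belong in the library (or upstream PyTorch) rather than in user code.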

To Reproduce

Run:

svc pre-split  
svc pre-resample  
svc pre-config -t so-vits-svc-4.0v1  
svc pre-hubert  
svc train

Additional context

Full output after running svc train:

[16:15:56] INFO     [16:15:56] Using strategy: auto                                                        train.py:98
INFO: GPU available: True (mps), used: True
           INFO     [16:15:56] GPU available: True (mps), used: True                                   rank_zero.py:63
INFO: TPU available: False, using: 0 TPU cores
           INFO     [16:15:56] TPU available: False, using: 0 TPU cores                                rank_zero.py:63
INFO: HPU available: False, using: 0 HPUs
           INFO     [16:15:56] HPU available: False, using: 0 HPUs                                     rank_zero.py:63
           WARNING  [16:15:56]                                                                         warnings.py:109
                    /opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-pa
                    ckages/so_vits_svc_fork/modules/synthesizers.py:81: UserWarning: Unused arguments:
                    {'n_layers_q': 3, 'use_spectral_norm': False, 'pretrained': {'D_0.pth':
                    'https://huggingface.co/datasets/ms903/sovits4.0-768vec-layer12/resolve/main/sovit
                    s_768l12_pre_large_320k/clean_D_320000.pth', 'G_0.pth':
                    'https://huggingface.co/datasets/ms903/sovits4.0-768vec-layer12/resolve/main/sovit
                    s_768l12_pre_large_320k/clean_G_320000.pth'}}
                      warnings.warn(f"Unused arguments: {kwargs}")

           INFO     [16:15:56] Decoder type: hifi-gan                                              synthesizers.py:100
           WARNING  [16:15:56]                                                                         warnings.py:109
                    /opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-pa
                    ckages/torch/nn/utils/weight_norm.py:143: FutureWarning:
                    `torch.nn.utils.weight_norm` is deprecated in favor of
                    `torch.nn.utils.parametrizations.weight_norm`.
                      WeightNorm.apply(module, name, dim)

           WARNING  [16:15:56]                                                                         warnings.py:109
                    /opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-pa
                    ckages/so_vits_svc_fork/utils.py:246: UserWarning: Keys not found in checkpoint
                    state dict:['emb_g.weight']
                      warnings.warn(f"Keys not found in checkpoint state dict:" f"{not_in_from}")

           WARNING  [16:15:56]                                                                         warnings.py:109
                    /opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-pa
                    ckages/so_vits_svc_fork/utils.py:264: UserWarning: Shape mismatch:
                    ['dec.cond.weight: torch.Size([512, 256, 1]) -> torch.Size([512, 768, 1])',
                    'enc_q.enc.cond_layer.weight_v: torch.Size([6144, 256, 1]) -> torch.Size([6144,
                    768, 1])', 'flow.flows.0.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) ->
                    torch.Size([1536, 768, 1])', 'flow.flows.2.enc.cond_layer.weight_v:
                    torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])',
                    'flow.flows.4.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) ->
                    torch.Size([1536, 768, 1])', 'flow.flows.6.enc.cond_layer.weight_v:
                    torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])',
                    'f0_decoder.cond.weight: torch.Size([192, 256, 1]) -> torch.Size([192, 768, 1])']
                      warnings.warn(

           INFO     [16:15:56] Loaded checkpoint 'logs/44k/G_0.pth' (epoch 0)                             utils.py:307
           INFO     [16:15:56] Loaded checkpoint 'logs/44k/D_0.pth' (epoch 0)                             utils.py:307
┏━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┓
┃   ┃ Name  ┃ Type                     ┃ Params ┃ Mode  ┃
┡━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━┩
│ 0 │ net_g │ SynthesizerTrn           │ 45.6 M │ train │
│ 1 │ net_d │ MultiPeriodDiscriminator │ 46.7 M │ train │
└───┴───────┴──────────────────────────┴────────┴───────┘
Trainable params: 92.4 M
Non-trainable params: 0
Total params: 92.4 M
Total estimated model params size (MB): 369
Modules in train mode: 486
Modules in eval mode: 0
[16:15:57] WARNING  [16:15:57]                                                                         warnings.py:109
                    /opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-pa
                    ckages/lightning/pytorch/trainer/connectors/data_connector.py:424: The
                    'val_dataloader' does not have many workers which may be a bottleneck. Consider
                    increasing the value of the `num_workers` argument` to `num_workers=9` in the
                    `DataLoader` to improve performance.

           INFO     [16:15:57] Setting current epoch to 0                                                 train.py:311
           INFO     [16:15:57] Setting total batch idx to 0                                               train.py:327
           INFO     [16:15:57] Setting global step to 0                                                   train.py:317
           WARNING  [16:15:57] Using TPU/MPS. Patching torch.stft to use cpu.                             train.py:200
[16:16:00] WARNING  [16:16:00]                                                                         warnings.py:109
                    /opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-pa
                    ckages/torch/nn/functional.py:5096: UserWarning: MPS: The constant padding of more
                    than 3 dimensions is not currently supported natively. It uses View Ops default
                    implementation to run. This may have performance implications. (Triggered
                    internally at
                    /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/Pad
                    .mm:465.)
                      return torch._C._nn.pad(input, pad, mode, value)

[16:16:02] WARNING  [16:16:02]                                                                         warnings.py:109
                    /opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-pa
                    ckages/torch/functional.py:704: UserWarning: stft with return_complex=False is
                    deprecated. In a future pytorch release, stft will return complex tensors for all
                    inputs, and return_complex=False will raise an error.
                    Note: you can still call torch.view_as_real on the complex output to recover the
                    old return format. (Triggered internally at
                    /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/SpectralOps.cpp:87
                    8.)
                      return _VF.stft(  # type: ignore[attr-defined]

           WARNING  [16:16:02]                                                                         warnings.py:109
                    /opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-pa
                    ckages/so_vits_svc_fork/train.py:447: FutureWarning:
                    `torch.cuda.amp.autocast(args...)` is deprecated. Please use
                    `torch.amp.autocast('cuda', args...)` instead.
                      with autocast(enabled=False):

Epoch 0/9999 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/92 0:00:06 • -:--:-- 0.00it/s v_num: 0.000
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/bin/svc", line 8, in <module>
    sys.exit(cli())
             ^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/so_vits_svc_fork/__main__.py", line 128, in train
    train(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/so_vits_svc_fork/train.py", line 149, in train
    trainer.fit(model, datamodule=datamodule)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 538, in fit
    call._call_and_handle_interrupt(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py", line 47, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 574, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 981, in _run
    results = self._run_stage()
              ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 1025, in _run_stage
    self.fit_loop.run()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/loops/fit_loop.py", line 205, in run
    self.advance()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/loops/fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 140, in run
    self.advance(data_fetcher)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 252, in advance
    batch_output = self.manual_optimization.run(kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/loops/optimization/manual.py", line 94, in run
    self.advance(kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/loops/optimization/manual.py", line 114, in advance
    training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py", line 319, in _call_strategy_hook
    output = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/strategies/strategy.py", line 390, in training_step
    return self.lightning_module.training_step(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/so_vits_svc_fork/train.py", line 508, in training_step
    self.manual_backward(loss_gen_all / accumulate_grad_batches)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/core/module.py", line 1082, in manual_backward
    self.trainer.strategy.backward(loss, None, *args, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/strategies/strategy.py", line 212, in backward
    self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/plugins/precision/precision.py", line 72, in backward
    model.backward(tensor, *args, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/lightning/pytorch/core/module.py", line 1101, in backward
    loss.backward(*args, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward
    torch.autograd.backward(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
    _engine_run_backward(
  File "/opt/homebrew/Caskroom/miniconda/base/envs/so-vits-svc-fork/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

Version

4.2.26

Platform

macOS 15.0.1

Code of Conduct

  • I agree to follow this project's Code of Conduct.

No Duplicate

  • I have checked existing issues to avoid duplicates.
@carlos-spa carlos-spa added the bug Something isn't working label Oct 24, 2024
@Bkmillanzi

Me too; I just bought a MacBook hoping to get better training output.
