
Issues on macOS #360

Closed
yuniit opened this issue Nov 1, 2024 · 6 comments

yuniit commented Nov 1, 2024

Hi, when I try to synthesize, it shows the error below:

MacBook Pro (15-inch, 2018)
Processor: 2.2 GHz 6-Core Intel Core i7

/opt/miniconda3/envs/f5/lib/python3.10/site-packages/gradio/processing_utils.py:593: UserWarning: Trying to convert audio automatically from int8 to 16-bit int format.
  warnings.warn(warning.format(data.dtype))
Traceback (most recent call last):
  File "/opt/miniconda3/envs/f5/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "/opt/miniconda3/envs/f5/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "/opt/miniconda3/envs/f5/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
    result = await self.call_function(
  File "/opt/miniconda3/envs/f5/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/opt/miniconda3/envs/f5/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/opt/miniconda3/envs/f5/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/opt/miniconda3/envs/f5/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/opt/miniconda3/envs/f5/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "/Users/rabbit/Desktop/F5-TTS/src/f5_tts/train/finetune_gradio.py", line 1211, in infer
    if not os.path.isfile(file_checkpoint):
  File "/opt/miniconda3/envs/f5/lib/python3.10/genericpath.py", line 30, in isfile
    st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
SWivid (Owner) commented Nov 1, 2024

File "/Users/rabbit/Desktop/F5-TTS/src/f5_tts/train/finetune_gradio.py", line 1211, in infer

Hi, maybe check which script you ran: f5-tts_infer-gradio is for inference, and f5-tts_finetune-gradio is for training/finetuning.
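
For context, the TypeError at the bottom of the traceback means os.path.isfile() was called with None, i.e. no checkpoint path was set when Synthesize was clicked. A guard along these lines would surface the real problem earlier (a hypothetical sketch, not the repo's actual code; checkpoint_exists is an illustrative helper):

import os

def checkpoint_exists(file_checkpoint):
    # os.path.isfile(None) raises TypeError ("stat: path should be string,
    # bytes, os.PathLike or integer, not NoneType"), so validate the value
    # first; in this thread file_checkpoint is None because no trained
    # checkpoint was selected in the finetune UI.
    if not file_checkpoint:
        raise ValueError("No model checkpoint selected")
    return os.path.isfile(file_checkpoint)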

yuniit (Author) commented Nov 2, 2024

Sorry for the late reply.

I installed using the following option:

1. As a pip package (if just for inference)
pip install git+https://github.com/SWivid/F5-TTS.git

and when I run f5-tts_finetune-gradio it shows this error:

Traceback (most recent call last):
  File "/opt/miniconda3/envs/f5-tts/bin/f5-tts_finetune-gradio", line 5, in <module>
    from f5_tts.train.finetune_gradio import main
  File "/opt/miniconda3/envs/f5-tts/lib/python3.10/site-packages/f5_tts/train/finetune_gradio.py", line 1386, in <module>
    projects, projects_selelect = get_list_projects()
  File "/opt/miniconda3/envs/f5-tts/lib/python3.10/site-packages/f5_tts/train/finetune_gradio.py", line 619, in get_list_projects
    for folder in os.listdir(path_data):
FileNotFoundError: [Errno 2] No such file or directory: '/opt/miniconda3/envs/f5-tts/lib/python3.10/site-packages/f5_tts/../../data'

SWivid (Owner) commented Nov 2, 2024

Hi @yuniit, so you want to train the model rather than run inference? In that case you should do the local installation, which is option 2.

If you want to do inference, the command is f5-tts_infer-gradio, not f5-tts_finetune-gradio.
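
For reference, the FileNotFoundError happens because the script resolves its data directory relative to the installed package, and under a pip install that path points outside the package tree and does not exist. An illustrative snippet (assuming the pip install; not the repo's exact code) showing where it resolves:

import os
import f5_tts

# finetune_gradio.py looks for a data/ directory two levels above the
# package; under a pip install the result lands in the environment's lib
# tree, where no data/ directory exists, hence the FileNotFoundError.
pkg_dir = os.path.dirname(f5_tts.__file__)            # .../site-packages/f5_tts
path_data = os.path.join(pkg_dir, "..", "..", "data")
print(os.path.abspath(path_data))                     # directory does not exist
os.listdir(path_data)                                 # raises FileNotFoundError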

yuniit closed this as completed Nov 4, 2024
hildazzz commented

> Hi @yuniit, so you want to train the model rather than run inference? In that case you should do the local installation, which is option 2.
>
> If you want to do inference, the command is f5-tts_infer-gradio, not f5-tts_finetune-gradio.

Hi, I installed according to option 2. When using f5-tts_infer-gradio for inference, the following error was encountered:
TypeError: Trying to convert ComplexFloat to the MPS backend but it does not have support for that dtype.
MacBook Pro (2.6 GHz Intel Core i7). How should this error be resolved? Thanks!

SWivid (Owner) commented Nov 16, 2024

@hildazzz I'm not very familiar with Mac devices. Is the Intel Core i7 paired with an AMD GPU rather than an M1?

In that case, it might only support CPU inference, which is relatively slow. Comment out

device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
if device == "mps":
    os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

and add

device = "cpu"

or:

  1. check the online Space demo usage
  2. try the MLX version or ONNX export: https://github.com/SWivid/F5-TTS#:~:text=f5%2Dtts%2Dmlx,version%20by%20DakeQQ, Export to ONNX Format #214
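
A runnable sketch of both the failure and the workaround (a minimal illustration, assuming a PyTorch build where MPS lacks complex-dtype support):

import torch

# On affected PyTorch builds, MPS has no complex-dtype support; moving any
# complex tensor to "mps" reproduces the reported TypeError.
if torch.backends.mps.is_available():
    z = torch.zeros(2, dtype=torch.complex64)
    z.to("mps")  # TypeError: Trying to convert ComplexFloat to the MPS backend ...

# Workaround from this comment: skip the cuda/mps auto-detection entirely
# and pin inference to the CPU.
device = "cpu"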

hildazzz commented

Thanks for your reply, problem solved!
