Hey,
I thought I would try the Banana extension repo today. I have tried changing the MODEL_ID ENV variable to a few different models, but I keep getting this error:
{'message': '', 'created': 1676709740, 'apiVersion': 'January 11, 2023', 'modelOutputs': [{'$error': {
  'code': 'APP_INFERENCE_ERROR',
  'name': 'OSError',
  'message': 'stabilityai/stable-diffusion-2-1-base does not appear to have a file named model_index.json.',
  'stack': '
Traceback (most recent call last):
  File "/api/diffusers/src/diffusers/configuration_utils.py", line 326, in load_config
    config_file = hf_hub_download(
  File "/opt/conda/envs/xformers/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/envs/xformers/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1205, in hf_hub_download
    raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/api/server.py", line 39, in inference
    output = user_src.inference(model_inputs)
  File "/api/app.py", line 227, in inference
    pipeline = getPipelineForModel(pipeline_name, model, normalized_model_id)
  File "/api/getPipeline.py", line 83, in getPipelineForModel
    pipeline = DiffusionPipeline.from_pretrained(
  File "/api/diffusers/src/diffusers/pipelines/pipeline_utils.py", line 462, in from_pretrained
    config_dict = cls.load_config(
  File "/api/diffusers/src/diffusers/configuration_utils.py", line 354, in load_config
    raise EnvironmentError(
OSError: stabilityai/stable-diffusion-2-1-base does not appear to have a file named model_index.json.
'}}]}
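For context on that traceback: at runtime the container only consults its local Hugging Face cache (outgoing downloads are disabled), so if the chosen MODEL_ID was never downloaded into the image during the build, from_pretrained can't find model_index.json. A minimal sketch of the failing call follows; exactly how the repo disables downloads (the local_files_only kwarg vs. an offline env var) is an assumption here.

# Rough reproduction of the runtime load that fails when the weights were
# never cached inside the image (model ID copied from the error above).
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    local_files_only=True,  # only the on-disk cache is consulted, no hf.co lookups
)
# -> OSError: stabilityai/stable-diffusion-2-1-base does not appear to have
#    a file named model_index.json.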
Can you give me a bit more info here?
It looks like there's some issue during the build download stage. Is there anything in the Banana build logs?
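For reference, the build download stage is what makes the offline runtime load possible: it fetches the model into the image's Hugging Face cache ahead of time. A rough sketch of that pattern, with illustrative names rather than the repo's actual download script:

# Illustrative build-time step: download the model into the image's cache so
# the offline from_pretrained() at runtime finds model_index.json and weights.
import os
from diffusers import DiffusionPipeline

model_id = os.environ.get("MODEL_ID", "stabilityai/stable-diffusion-2-1-base")
DiffusionPipeline.from_pretrained(model_id)  # populates the local HF cache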
I cloned and tested https://github.com/kiri-art/docker-diffusers-api-build-download as-is (i.e. no changes), and it all works:
Banana build log:
2023-02-18T11:19:14.000Z You've triggered a build + deploy on Banana. It may take ~1 hr to complete. Thanks for your patience.
Waiting for logs...
SUCCESS: Git Authorization
SUCCESS: Build Started
SUCCESS: Build Finished... Running optimizations
SUCCESS: Model Registered
Your model was updated and is now deployed!
Test:
$ BANANA_MODEL_KEY="XXX" python test.py txt2img --banana
Running test: txt2img
{
"modelInputs": {
"prompt": "realistic field of grass",
"num_inference_steps": 20
},
"callInputs": {}
}
# [... snipped interim calls from long load after first deploy...]
Request took 341.8s (init: 5.2s, inference: 3.6s)
Saved ./tests/output/txt2img.png
{
"$meta": {
"MODEL_ID": "stabilityai/stable-diffusion-2-1-base",
"PIPELINE": "StableDiffusionPipeline",
"SCHEDULER": "DPMSolverMultistepScheduler"
},
"image_base64": "[512x512 PNG image, 493.5KiB bytes]",
"$timings": {
"init": 5168,
"inference": 3615
},
"$mem_usage": 0.7632296154709317
}
Second call (cold start):
Request took 19.2s (init: 3.2s, inference: 3.2s)
Saved ./tests/output/txt2img.png
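For anyone wanting to fire the same payload by hand instead of via test.py, a rough sketch follows; the endpoint URL, port and missing auth handling are placeholders, not the repo's or Banana's documented interface:

# Hypothetical manual request with the same modelInputs/callInputs shown above.
# The URL is a placeholder (e.g. a locally running container); adjust for your setup.
import requests

payload = {
    "modelInputs": {"prompt": "realistic field of grass", "num_inference_steps": 20},
    "callInputs": {},
}
resp = requests.post("http://localhost:8000/", json=payload, timeout=600)  # placeholder URL
result = resp.json()
print(result.get("$timings"), len(result.get("image_base64", "")))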