
Error when building the image #11

Open
dwow100 opened this issue Apr 13, 2023 · 2 comments


dwow100 commented Apr 13, 2023

docker build -t soulteary/prompt-generator:base . -f docker/Dockerfile.base

Error response from daemon: Dockerfile parse error line 11: FROM requires either one or three arguments
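A likely explanation, not confirmed in this thread: Dockerfile instructions are case-insensitive, so a builder without heredoc support that reaches `RUN cat > /get-models.py <<EOF` reads the next line, `from transformers import …`, as a `FROM` instruction with too many arguments — which is exactly the reported error. Heredocs in `RUN` require BuildKit with a recent Dockerfile frontend, so an alternative to rewriting the file may be to enable BuildKit (sketch, assuming Docker 18.09+ with BuildKit available):

```dockerfile
# syntax=docker/dockerfile:1
# Place this directive on the first line of docker/Dockerfile.base, then build
# with BuildKit enabled, e.g.:
#   DOCKER_BUILDKIT=1 docker build -t soulteary/prompt-generator:base . -f docker/Dockerfile.base
```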


cnhuz commented Apr 13, 2023

I ran into this on Windows too. Changing this section of Dockerfile.base:


RUN cat > /get-models.py <<EOF
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-zh-en')
AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-zh-en')
pipeline('text-generation', model='succinctly/text2image-prompt-generator')
EOF

to:


RUN echo "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline" > /get-models.py && \
    echo "AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-zh-en')" >> /get-models.py && \
    echo "AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-zh-en')" >> /get-models.py && \
    echo "pipeline('text-generation', model='succinctly/text2image-prompt-generator')" >> /get-models.py

fixed it for me.
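For anyone who wants to verify the workaround outside Docker, a quick sketch (the `/tmp` paths here are illustrative, not from the repo) showing that the `echo` chain writes exactly the same file as the original heredoc:

```shell
# Write the file the heredoc way (quoted EOF = content taken verbatim).
cat > /tmp/get-models-heredoc.py <<'EOF'
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-zh-en')
AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-zh-en')
pipeline('text-generation', model='succinctly/text2image-prompt-generator')
EOF

# Write it the echo-chain way: ">" truncates on the first line, ">>" appends after.
echo "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline" > /tmp/get-models-echo.py
echo "AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-zh-en')" >> /tmp/get-models-echo.py
echo "AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-zh-en')" >> /tmp/get-models-echo.py
echo "pipeline('text-generation', model='succinctly/text2image-prompt-generator')" >> /tmp/get-models-echo.py

# Both files should be byte-identical.
diff /tmp/get-models-heredoc.py /tmp/get-models-echo.py && echo "identical"
```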

@cnhuz cnhuz mentioned this issue Apr 13, 2023
@baymax55

If building Dockerfile.gpu fails with the same error, replace this code:
RUN cat > /get-models.py <<EOF
from clip_interrogator import Config, Interrogator
import torch
config = Config()
config.device = 'cuda' if torch.cuda.is_available() else 'cpu'
config.blip_offload = False if torch.cuda.is_available() else True
config.chunk_size = 2048
config.flavor_intermediate_count = 512
config.blip_num_beams = 64
config.clip_model_name = "ViT-H-14/laion2b_s32b_b79k"
ci = Interrogator(config)
EOF

with:

RUN echo "from clip_interrogator import Config, Interrogator" >> /get-models.py && \ echo "import torch" >> /get-models.py && \ echo "config = Config()" >> /get-models.py && \ echo "config.device = 'cuda' if torch.cuda.is_available() else 'cpu'" >> /get-models.py && \ echo "config.blip_offload = False if torch.cuda.is_available() else True" >> /get-models.py && \ echo "config.chunk_size = 2048" >> /get-models.py && \ echo "config.flavor_intermediate_count = 512" >> /get-models.py && \ echo "config.blip_num_beams = 64" >> get-models.py && \ echo "config.clip_model_name = \"ViT-H-14/laion2b_s32b_b79k\"" >> /get-models.py && \ echo "ci = Interrogator(config)" >> /get-models.py
