Releases · matatonic/openedai-vision
0.16.1
Recent updates
- Add "start with" parameter to pre-fill the assistant response, with backend support (doesn't work with all models) - aka 'Sure,' support.
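A rough sketch of what the "start with" pre-fill might look like from the client side. The field name `start_with`, the model name, and the payload wiring are assumptions for illustration, not the project's documented API:

```python
# Sketch of the 'Sure,' pre-fill feature: the exact parameter name and
# placement in the request body are assumptions.
import json

def build_prefill_request(prompt: str, prefill: str = "Sure,") -> dict:
    """Build an OpenAI-style chat completion payload that asks the backend
    to begin the assistant's reply with `prefill` (hypothetical field)."""
    return {
        "model": "vision-model",  # placeholder model name
        "messages": [
            {"role": "user", "content": prompt},
        ],
        # Hypothetical extra body parameter carrying the pre-filled text;
        # per the release note, this doesn't work with all models.
        "start_with": prefill,
    }

print(json.dumps(build_prefill_request("Describe the image."), indent=2))
```

Pre-filling the reply with an agreeable opener like "Sure," is a common trick to steer models past refusals or into a desired response format.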
0.16.0
- new model support: microsoft/Phi-3-vision-128k-instruct
0.15.1
- new model support: OpenGVLab/Mini-InternVL-Chat-2B-V1-5
0.15.0
- new model support: cogvlm2-llama3-chinese-chat-19B, cogvlm2-llama3-chat-19B
0.14.0
- docker-compose.yml: Assume the runtime supports the device (i.e., nvidia)
- new model support: qihoo360/360VL-8B, qihoo360/360VL-70B (the 70B has a loading error, see note; it's also too large for me to test, and 4-bit & 8-bit loading are not working for me either - hopefully a quantized model comes out soon)
- new model support: BAAI/Emu2-Chat. Can be slow to load; may need the --max-memory option to control loading across multiple GPUs
- new model support: TIGER-Lab/Mantis: Mantis-8B-siglip-llama3, Mantis-8B-clip-llama3, Mantis-8B-Fuyu
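For the Emu2-Chat note above: a `--max-memory` option like this typically ends up as the per-device `max_memory` dict that Hugging Face's `device_map="auto"` loading accepts. The comma-separated "device:amount" flag format below is an assumption, not the project's documented syntax:

```python
# Hypothetical parser turning a --max-memory value into the per-device
# dict used by Hugging Face big-model loading; the spec format is assumed.
def parse_max_memory(spec: str) -> dict:
    """Turn e.g. "0:20GiB,1:20GiB,cpu:64GiB" into
    {0: "20GiB", 1: "20GiB", "cpu": "64GiB"}."""
    max_memory = {}
    for part in spec.split(","):
        device, amount = part.split(":")
        # GPU indices are ints; "cpu" stays a string key.
        key = int(device) if device.isdigit() else device
        max_memory[key] = amount
    return max_memory

print(parse_max_memory("0:20GiB,1:20GiB"))
# A dict like this would then be passed along the lines of:
# AutoModelForCausalLM.from_pretrained(model_id, device_map="auto",
#                                      max_memory=max_memory)
```

Capping per-GPU memory this way keeps huge models like Emu2-Chat from over-filling the first GPU before spilling onto the next.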
0.13.0
- new model support: InternLM-XComposer2-4KHD
- new model support: BAAI/Bunny-Llama-3-8B-V
- new model support: qresearch/llama-3-vision-alpha-hf
0.12.1
- Fix: data: URLs, revert load_image change
0.12.0
- new model support: HuggingFaceM4/idefics2-8b, HuggingFaceM4/idefics2-8b-AWQ
- Fix: remove prompt from output of InternVL-Chat-V1-5
0.11.0
- new model support: OpenGVLab/InternVL-Chat-V1-5, supports up to 4K resolution, a top open-source model
- MiniGemini renamed to MGM upstream
0.10.0
Changes
- new model support: adept/fuyu-8b
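All of the models above are served behind an OpenAI-compatible chat completions API, so a vision request inlines the image as a base64 `data:` URL in the standard message format. A minimal payload sketch, assuming the `fuyu-8b` model name and server wiring (only the message shape follows the OpenAI API):

```python
# Build an OpenAI-style vision chat payload; the model name is a
# placeholder and actual served names may differ.
import base64
import json

def build_vision_request(image_bytes: bytes, question: str,
                         model: str = "fuyu-8b") -> dict:
    """Inline an image as a base64 data: URL alongside a text question,
    using the OpenAI chat completions content-part format."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

payload = build_vision_request(b"\x89PNG...", "What is in this image?")
print(json.dumps(payload)[:100])
```

A payload like this would be POSTed to the server's `/v1/chat/completions` endpoint, the same route the official OpenAI clients use when given a custom base URL.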