Request for MLC Version Update and NVILA Support in nano_llm Model #61
Hi,
Thank you for your amazing work on this project! I have two requests:
Could you please update the MLC version to the latest release? This would help us keep our setup up to date.
I'm also using the nano_llm model and was wondering if there are any plans to add support for NVILA. This would be very useful for applications that require both visual and linguistic integration.
If these updates are not currently planned, any guidance on how we might implement them ourselves would be greatly appreciated.
Thank you again for your time and support!
Comments
Hi! Yes, I am about halfway through updating MLC to the latest, haha. I hope for it to be ready by Jan 1. I had meant to release the container along with Orin Nano Super, but hit a couple of build issues and had to circle back.
I am also updating jetson-ai-lab.com with a JavaScript 'model configurator' that will give you the right 'docker run' or docker-compose command to spin up a local OpenAI-protocol server (these are now provided by MLC, TRT-LLM, vLLM, llama.cpp, and ollama).
That approach is more scalable going forward than me keeping NanoLLM updated myself (also, at that time those OpenAI servers didn't exist yet, and they didn't support img2txt completion for VLMs).
For NVILA, that is supposed to be available through vLLM, and I was hoping to rebuild that container as well. Will work through these! Ultimately this will make them easier to keep updated 👍
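For reference, here is a minimal sketch of what calling one of those local OpenAI-protocol servers could look like from Python, including an image for a VLM. The port, API key, model name, and image path are placeholders, and whether image input works depends on the backend actually exposing the vision chat-completions format.

```python
# Minimal sketch: assumes an OpenAI-compatible server is already running on localhost:8000
# (started via the docker run / docker-compose command from the model configurator).
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # local servers usually ignore the key

# Encode a local image as a data URL for the chat-completions vision message format.
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="VILA1.5-3b",  # placeholder; use whatever model the server was launched with
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=128,
)
print(response.choices[0].message.content)
```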
Thank you again for your previous updates and for all the effort you've been putting into this project! 😊 I was wondering if there has been any progress on the update. I completely understand that these tasks can take time, especially with build challenges, but I thought I'd check in to see if there are any updates or an estimated timeline for the release. Thank you once again for your hard work; looking forward to hearing from you!
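Regarding the original request for guidance on implementing NVILA support independently: pending native support in nano_llm, a rough sketch of the vLLM route mentioned in the maintainer's reply could look like the following. The Hugging Face model id, the trust_remote_code flag, and NVILA support in a given vLLM build for Jetson are all assumptions, not confirmed in this thread; image inputs could then go through the OpenAI-compatible server path shown earlier.

```python
# Sketch only: assumes a vLLM build that supports NVILA and a valid model id.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Efficient-Large-Model/NVILA-8B",  # hypothetical id; check the actual Hugging Face repo name
    trust_remote_code=True,                  # VLM repos often ship custom modeling code
    max_model_len=4096,
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize what NVILA is designed for."], params)
print(outputs[0].outputs[0].text)
```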