Hello, I have some questions about FinGPT-RAG.
In your paper you mention that you fine-tuned LLaMA-7B on instruction datasets and obtained good results compared to other LLMs, and that adding the RAG module improved results further.
However, as far as I can see, you have only open-sourced the code framework, not a fine-tuned FinGPT with RAG that can be used directly.
Will you open-source the fine-tuned FinGPT with RAG later?
Also, if I don't have enough computing resources to perform instruction fine-tuning on LLaMA-7B, can I use the RAG method alone to enhance an LLM's ability in financial sentiment analysis?
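To make the last question concrete, here is a minimal sketch of what "RAG alone, no fine-tuning" could look like: retrieve related financial snippets and prepend them to the prompt given to a frozen LLM. The corpus, bag-of-words retrieval, and prompt template below are illustrative assumptions for discussion, not FinGPT-RAG's actual pipeline.

```python
# Illustrative sketch (not the FinGPT-RAG implementation): retrieval-augmented
# prompting for financial sentiment analysis with a frozen, non-fine-tuned LLM.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split on non-alphanumeric characters."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two token lists (bag-of-words)."""
    ca, cb = Counter(a), Counter(b)
    num = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    den = math.sqrt(sum(v * v for v in ca.values())) * \
          math.sqrt(sum(v * v for v in cb.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus snippets most similar to the query."""
    q = tokenize(query)
    return sorted(corpus, key=lambda d: cosine(q, tokenize(d)), reverse=True)[:k]

def build_prompt(headline, context_docs):
    """Assemble a sentiment-classification prompt for a frozen LLM."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Context:\n" + context + "\n\n"
        f"Headline: {headline}\n"
        "Classify the sentiment of the headline as positive, negative, or neutral."
    )

# Hypothetical retrieval corpus of financial news snippets.
corpus = [
    "Company X reported record quarterly profit, beating analyst estimates.",
    "The central bank held interest rates steady this month.",
    "Company X shares fell after a product recall announcement.",
]

headline = "Company X posts record quarterly profit"
prompt = build_prompt(headline, retrieve(headline, corpus, k=1))
```

The resulting `prompt` string would then be sent to the LLM as-is; no gradient updates are involved, so the only compute cost is inference.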