Is your feature request related to a problem? Please describe.
Embedding models typically have smaller context windows than LLMs, which can limit the quality of embeddings generated for large contexts. This discrepancy can result in incomplete or less meaningful embeddings for the context used in devcontainer.json generation.
Describe the solution you'd like
Propose a novel way of handling this challenge.
Ensure that the primary context used by the LLM doesn't lose its semantic meaning when split or summarized for embedding purposes.
Adjust the logic dynamically based on the maximum token limits of the LLM and embedding models specified in the .env file.
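As a rough illustration of the dynamic adjustment described above, the context could be chunked to fit the embedding model's window while the LLM receives the full text. This is only a sketch: the env var names (`LLM_MAX_TOKENS`, `EMBEDDING_MAX_TOKENS`) are hypothetical placeholders for whatever keys the project's .env actually uses, and whitespace splitting stands in for a real tokenizer.

```python
import os

# Hypothetical env var names; the project's actual .env keys may differ.
LLM_MAX_TOKENS = int(os.getenv("LLM_MAX_TOKENS", "128000"))
EMBEDDING_MAX_TOKENS = int(os.getenv("EMBEDDING_MAX_TOKENS", "8191"))

def split_for_embedding(text, max_tokens=EMBEDDING_MAX_TOKENS, overlap=50):
    """Split text into chunks that fit the embedding model's window.

    Uses whitespace-separated words as a rough proxy for model tokens;
    a real implementation would count with the embedding model's tokenizer.
    Overlapping chunks help preserve semantic continuity across boundaries.
    """
    words = text.split()
    if len(words) <= max_tokens:
        return [text]  # fits as-is, no splitting needed
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break  # last chunk already covers the tail of the text
    return chunks
```

Each chunk could then be embedded separately (and, e.g., averaged or stored individually), while the untouched full context still goes to the LLM.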
Describe alternatives you've considered
Using the same context for both LLM and embeddings, which is not feasible due to the context length differences.
Manually tweaking the context content, which is not scalable or efficient.
Additional context
Consider edge cases where critical files like README.md might need special handling to ensure they are adequately represented in both the LLM and embedding contexts. Thorough testing is essential to ensure the adjustments maintain the semantic integrity and utility of the generated devcontainer.json configurations.
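One possible way to guarantee that critical files survive truncation is to place them first when assembling the context under a token budget. The priority list and helper below are hypothetical, not part of the existing codebase:

```python
def build_context(files, budget):
    """Assemble a context string, reserving room for critical files first.

    `files` maps filename -> content; `budget` is a token budget measured
    here in whitespace-separated words (a stand-in for real token counts).
    """
    CRITICAL = ("README.md", "devcontainer.json")  # hypothetical priority list
    # Critical files sort first, so they claim budget before anything else.
    ordered = sorted(files.items(), key=lambda kv: kv[0] not in CRITICAL)
    parts, used = [], 0
    for name, content in ordered:
        tokens = len(content.split())  # rough proxy for model tokens
        if used + tokens > budget:
            continue  # skip files that don't fit; critical ones were placed first
        parts.append(f"# {name}\n{content}")
        used += tokens
    return "\n\n".join(parts)
```

Tests for this behavior would assert that README.md appears in the output even when the combined files exceed the budget, which addresses the edge case above.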
If no one is assigned to the issue, feel free to tackle it, without confirmation from us, after registering your attempt. In the event that multiple PRs are made from different people, we will generally accept those with the cleanest code.
Please respect others by only submitting attempts to PRs you are eligible to work on.
e.g. If you have reached the limit of active attempts, please wait until a slot frees up before submitting a new PR.
If you cannot submit an attempt, you will not receive a payout.
Thank you for contributing to daytonaio/devcontainer-generator!