feat: re-add ask models for simple mode #691
Conversation
Walkthrough

This update refactors model configuration prompts and provider setup in the codebase. It removes conditional logic and parameters from provider question functions, making model and embedding model prompts unconditional. Model config creation is centralized, and provider-specific template setup functions are renamed. Type and argument changes are propagated throughout, with some template files updated or removed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant ProviderQuestion
    participant ModelConfigHelper
    User->>CLI: Start setup
    CLI->>ProviderQuestion: Prompt for API key (if needed)
    CLI->>ProviderQuestion: Prompt for LLM model
    CLI->>ProviderQuestion: Prompt for embedding model
    ProviderQuestion-->>CLI: Return config
    CLI->>ModelConfigHelper: (Optional) getGpt41ModelConfig (for CI/simple flows)
    ModelConfigHelper-->>CLI: Return model config
    CLI-->>User: Complete setup with selected config
```
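To make the "centralized model config" part of the walkthrough concrete, here is a hedged sketch of what the helper could look like. Only `getGpt41ModelConfig` in `helpers/models.ts`, the GPT-4.1 default, and the `isConfigured()`/`apiKey` relationship are taken from the review comments below; every other field name and default value is an assumption for illustration, not the actual diff.

```ts
// Hypothetical sketch of packages/create-llama/helpers/models.ts — fields marked
// "assumed" are illustrative guesses, not code from this PR.
import { ModelConfig } from "./types";

export function getGpt41ModelConfig(openAiKey?: string): ModelConfig {
  return {
    provider: "openai",                       // assumed field
    apiKey: openAiKey,
    model: "gpt-4.1",
    embeddingModel: "text-embedding-3-large", // assumed default
    dimensions: 1024,                         // assumed, matching the OpenAI mapping discussed below
    isConfigured(): boolean {
      return !!this.apiKey; // per the nitpick below: check the object's apiKey, not the captured param
    },
  };
}
```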
Actionable comments posted: 5
🔭 Outside diff range comments (11)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1)
4-16: Fail fast when `MODEL`/`EMBEDDING_MODEL` env vars are missing

Both fields currently fall back to the empty string, which Ollama rejects at runtime with a vague "model not found" error. Guard early so misconfiguration is detected immediately:

```diff
 export function initSettings() {
   const config = {
     host: process.env.OLLAMA_BASE_URL ?? "http://127.0.0.1:11434",
   };
+
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "Required env vars MODEL and/or EMBEDDING_MODEL are not set for Ollama"
+    );
+  }
+
   Settings.llm = new Ollama({
     model: process.env.MODEL!,
     config,
   });
   Settings.embedModel = new OllamaEmbedding({
     model: process.env.EMBEDDING_MODEL!,
     config,
   });
 }
```

This avoids silent misconfigurations and aligns with the stricter checks added for other providers.
packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1)
5-17: Guard against missing `MODEL`/`EMBEDDING_MODEL` before non-null assertions

`process.env.MODEL!` and `embedModelMap[process.env.EMBEDDING_MODEL!]` assume the vars are always present. If they are undefined the app starts, then explodes with an obscure error from the SDK.

```diff
 export function initSettings() {
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "MODEL and EMBEDDING_MODEL must be set before initialising Groq provider"
+    );
+  }
   const embedModelMap: Record<string, string> = {
     "all-MiniLM-L6-v2": "Xenova/all-MiniLM-L6-v2",
     "all-mpnet-base-v2": "Xenova/all-mpnet-base-v2",
   };
@@
   Settings.embedModel = new HuggingFaceEmbedding({
     modelType: embedModelMap[process.env.EMBEDDING_MODEL!],
   });
 }
```

This keeps the failure surface small and the messages clear.
packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)
8-18: Non-null assertions mask config errors

`process.env.MODEL!` and `process.env.EMBEDDING_MODEL!` are asserted non-null, yet nothing ensures they are. Prefer explicit validation to prevent runtime surprises:

```diff
 export function initSettings() {
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "Anthropic provider requires MODEL and EMBEDDING_MODEL env vars"
+    );
+  }
   const embedModelMap: Record<string, string> = {
```

Also consider lifting `embedModelMap` to a shared util to avoid duplication across providers.

packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)
9-16: Guard against missing `MODEL`/`EMBEDDING_MODEL` env vars

`process.env.MODEL` (and the embedding counterpart) are blindly cast with `as`. If the variable is undefined the SDK will throw later at runtime, yet the compiler remains silent.

```diff
- model: process.env.MODEL as keyof typeof ALL_AVAILABLE_MISTRAL_MODELS,
+ model: assertEnv("MODEL") as keyof typeof ALL_AVAILABLE_MISTRAL_MODELS,
```

Consider a small helper:

```ts
function assertEnv(name: string): string {
  const v = process.env[name];
  if (!v) throw new Error(`Environment variable ${name} must be defined`);
  return v;
}
```

packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)
9-16: Same env-var null-safety concern as the Mistral settings

Blind casts of `process.env.MODEL`/`EMBEDDING_MODEL` may explode later. Reuse the same `assertEnv` helper (or similar) to fail fast and surface configuration errors early.

packages/create-llama/questions/ci.ts (1)
18-25: `async` is now redundant – drop it to simplify

`getCIQuestionResults` no longer awaits anything; returning a plain object wrapped in a resolved Promise is superfluous.

```diff
-export async function getCIQuestionResults(
+export function getCIQuestionResults(
```

Adjust the return type accordingly (`QuestionResults`, not `Promise<QuestionResults>`). Less cognitive load and slightly faster execution.

packages/create-llama/helpers/providers/ollama.ts (1)
60-84: `process.exit(1)` in a helper breaks consumers

`ensureModel` kills the entire Node process on failure. If this helper is ever reused in a library or inside Jest tests, it will terminate the runner unexpectedly. Bubble the error and let the caller decide:

```diff
- console.log(red(...));
- process.exit(1);
+ throw new Error(red(`Model ${modelName} missing. Run 'ollama pull ${modelName}'.`));
```

packages/create-llama/helpers/providers/gemini.ts (1)
35-47: Prompt for API key echoes the secret in the terminal

Typing the key in an echoed prompt prints it back in clear text (and it can end up in terminal scrollback or session logs). Use `type: "password"` so the terminal masks input.

```diff
- type: "text",
+ type: "password",
```

packages/create-llama/helpers/providers/groq.ts (1)
91-104: API key can still be empty after the prompt

If the user simply hits Enter when asked for the key and no `GROQ_API_KEY` env var is set, we move on with an empty string. `getAvailableModelChoicesGroq(config.apiKey!)` then throws, but the resulting stack trace is less user-friendly than an early validation.

```diff
 if (!config.apiKey) {
   const { key } = await prompts(
@@
   );
-  config.apiKey = key || process.env.GROQ_API_KEY;
+  config.apiKey = key || process.env.GROQ_API_KEY;
+
+  if (!config.apiKey?.trim()) {
+    console.log(
+      red(
+        "A Groq API key is required to fetch model choices. Aborting.",
+      ),
+    );
+    process.exit(1);
+  }
 }
```

packages/create-llama/helpers/providers/azure.ts (1)
54-64: `isConfigured()` always returns `false` – is that intentional?

For Azure the comment says the provider "can't be fully configured", but returning `false` irrespective of the presence of `AZURE_OPENAI_KEY` suppresses downstream checks that merely need the key (e.g., early CI validation).

```diff
-isConfigured(): boolean {
-  // the Azure model provider can't be fully configured as endpoint and deployment names have to be configured with env variables
-  return false;
-},
+isConfigured(): boolean {
+  return Boolean(config.apiKey ?? process.env.AZURE_OPENAI_KEY);
+},
```

If additional env variables are indeed mandatory, consider checking those explicitly so users get a precise error instead of a blanket "not configured".
packages/create-llama/helpers/providers/openai.ts (1)
31-52: `config.apiKey` may be `undefined` in CI → `getAvailableModelChoices()` will throw

`config.apiKey` is only guaranteed to be populated when
a) the environment variable is set, or
b) the interactive prompt runs.

Inside CI (`isCI === true`) the prompt is skipped, so a missing `OPENAI_API_KEY` leads to an undefined key that is subsequently passed to `getAvailableModelChoices(...)` (line 58/70). The helper immediately throws:

```ts
if (!apiKey) {
  throw new Error("need OpenAI key to retrieve model choices");
}
```

→ Any CI job without the env var will now fail even though interactive input is impossible. Fail early with a clear message before hitting the remote call:

```diff
+// In CI we must fail early with a clear message before hitting the remote call.
+if (!config.apiKey && isCI) {
+  throw new Error(
+    "OPENAI_API_KEY is not set in the CI environment – required for model discovery",
+  );
+}
 if (!config.apiKey && !isCI) {
```

Alternatively, short-circuit the model/embedding prompts when the key is absent in CI.
🧹 Nitpick comments (14)
packages/create-llama/helpers/models.ts (1)
9-11: `isConfigured` should rely on the object's `apiKey`, not the captured param

`isConfigured` closes over the `openAiKey` argument. If the returned config object is later mutated (`config.apiKey = …`), `isConfigured()` will still look at the stale captured value and give the wrong answer.

```diff
- isConfigured(): boolean {
-   return !!openAiKey;
- },
+ isConfigured(): boolean {
+   return !!this.apiKey;
+ },
```

This keeps the checker truthful and avoids surprising behaviour.
packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)
4-16: Handle the `parseInt` result to avoid passing `NaN` as dimensions

If `EMBEDDING_DIM` is set but not a valid integer, `parseInt` returns `NaN`, which propagates silently to the OpenAI SDK.

```diff
 Settings.embedModel = new OpenAIEmbedding({
   model: process.env.EMBEDDING_MODEL,
-  dimensions: process.env.EMBEDDING_DIM
-    ? parseInt(process.env.EMBEDDING_DIM)
-    : undefined,
+  dimensions: (() => {
+    if (!process.env.EMBEDDING_DIM) return undefined;
+    const dim = Number.parseInt(process.env.EMBEDDING_DIM, 10);
+    if (Number.isNaN(dim)) {
+      throw new Error("EMBEDDING_DIM must be an integer");
+    }
+    return dim;
+  })(),
 });
```

Explicit validation prevents hard-to-trace SDK errors.
packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)
9-16: Return type is implicit – add it for clarity

A tiny nit: `initSettings` has no return value; declaring `(): void` makes the intent explicit and avoids accidental future misuse.

packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)
9-16: Add an explicit `void` return type for `initSettings`.

packages/create-llama/questions/ci.ts (1)
1-1: The `import` statement pulls the whole helpers file just for one function

If `getGpt41ModelConfig` is the lone export, okay; if not, use a named-import path such as `"../helpers/models/getGpt41ModelConfig"` to keep bundle size down in ESM tree-shaking scenarios. Not critical, but worth tracking.

packages/create-llama/helpers/providers/ollama.ts (1)
20-28: `config` declared with `const` but mutated later – prefer `let` or freeze

While mutating properties of a `const` object is legal, it sends mixed signals. Either:

- Declare with `let` and mutate, or
- Keep `const` and build a new object per step (`{ ...config, model }`) – see the sketch below.

Consistency aids maintainability.
packages/create-llama/helpers/providers/gemini.ts (1)

18-33: `config` mutability / `isConfigured` closure caveat

`isConfigured` closes over `config`; later mutations (model / embeddingModel) are fine, but `apiKey` may be updated after the method is read by callers, yielding stale truthiness. Assign `isConfigured` after all mutations or compute it lazily:

```ts
isConfigured() {
  return !!this.apiKey || !!process.env.GOOGLE_API_KEY;
}
```

packages/create-llama/helpers/providers/huggingface.ts (1)
34-44: Skip the prompt when there is only one available LLM model

Because `MODELS` currently holds a single hard-coded entry, the user is forced through an unnecessary prompt. Eliminating the prompt when `MODELS.length === 1` keeps the simple-mode flow truly "simple".

```diff
-const { model } = await prompts(
-  {
-    type: "select",
-    name: "model",
-    message: "Which Hugging Face model would you like to use?",
-    choices: MODELS.map(toChoice),
-    initial: 0,
-  },
-  questionHandlers,
-);
-config.model = model;
+if (MODELS.length === 1) {
+  config.model = MODELS[0];
+} else {
+  const { model } = await prompts(
+    {
+      type: "select",
+      name: "model",
+      message: "Which Hugging Face model would you like to use?",
+      choices: MODELS.map(toChoice),
+      initial: 0,
+    },
+    questionHandlers,
+  );
+  config.model = model;
+}
```

packages/create-llama/helpers/providers/groq.ts (1)
118-133: Duplicate logic across providers – consider extracting a shared embedding-model prompt

The embedding-model prompt block is identical in at least HuggingFace, Anthropic, Azure, Groq, … A tiny helper such as `promptForEmbeddingModel(EMBEDDING_MODELS)` would remove ~10 repeated lines per provider and make future changes (e.g., adding a "custom" option) one-shot.

packages/create-llama/helpers/providers/index.ts (1)
50-76: Replace the long `switch` with a provider-function map

The growing `switch` is starting to look unmaintainable; every new provider touches this file. A mapping keeps the logic declarative and avoids forgotten `break`s.

```diff
- let modelConfig: ModelConfigParams;
- switch (modelProvider) {
-   case "ollama":
-     modelConfig = await askOllamaQuestions();
-     break;
-   case "groq":
-     modelConfig = await askGroqQuestions();
-     break;
-   ...
-   default:
-     modelConfig = await askOpenAIQuestions();
- }
+ const providerToFn: Record<string, () => Promise<ModelConfigParams>> = {
+   openai: askOpenAIQuestions,
+   groq: askGroqQuestions,
+   ollama: askOllamaQuestions,
+   anthropic: askAnthropicQuestions,
+   gemini: askGeminiQuestions,
+   mistral: askMistralQuestions,
+   "azure-openai": askAzureQuestions,
+   "t-systems": askLLMHubQuestions,
+   huggingface: askHuggingfaceQuestions,
+ };
+
+ const fn = providerToFn[modelProvider] ?? askOpenAIQuestions;
+ const modelConfig = await fn();
```

packages/create-llama/helpers/providers/anthropic.ts (2)
51-62: Whitespace key → invalid key

`prompts` returns an empty string when the user just presses space(s). `isConfigured()` would then wrongly regard `" "` as a valid API key. Trim before assignment.

```diff
-config.apiKey = key || process.env.ANTHROPIC_API_KEY;
+const trimmed = key?.trim();
+config.apiKey = trimmed ? trimmed : process.env.ANTHROPIC_API_KEY;
```
64-91: Shared code duplication – extract a common helper

Same observation as in Groq: the embedding-model prompt and dimension lookup are duplicated across providers. A helper such as

```ts
export async function promptForEmbedding<T extends Record<string, { dimensions: number }>>(
  models: T,
  message = "Which embedding model would you like to use?",
) {
  const { embeddingModel } = await prompts(
    {
      type: "select",
      name: "embeddingModel",
      message,
      choices: Object.keys(models).map(toChoice),
      initial: 0,
    },
    questionHandlers,
  );
  return { name: embeddingModel, dimensions: models[embeddingModel].dimensions };
}
```

would shrink each provider implementation to three lines.
packages/create-llama/helpers/typescript.ts (1)
39-48: Provider settings are copied twice – consider DRYing the logic

`installLlamaIndexServerTemplate()` now copies `components/providers/typescript/<provider>/**` into `src/app` (here), while `installLegacyTSTemplate()` performs an almost identical copy into `<engine>` (lines 262-266). If both flows are exercised for the same project structure this creates duplicate files and maintenance overhead.

Suggestion: extract a shared helper (sketched below), or decide on a single destination (`engine` vs `src/app`) based on template type to avoid redundant copies.
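One possible shape for such a shared helper — note this is a sketch: the helper name, the `templatesDir` constant, and the exact `copy()` call signature are assumptions, not code from this PR; only the `components/providers/typescript/<provider>` layout and the existence of `copy()` in `helpers/copy.ts` come from the review context.

```ts
// Hypothetical shared helper – name, templatesDir, and the copy() signature are assumptions.
import path from "path";
import { copy } from "./copy";

const templatesDir = path.join(__dirname, "..", "templates"); // assumed location

export async function copyProviderSettings(
  root: string,
  provider: string,
  destination: string, // e.g. "src/app" for the server template, the engine dir for the legacy one
): Promise<void> {
  const providerTemplate = path.join(
    templatesDir,
    "components",
    "providers",
    "typescript",
    provider,
  );
  // Copy every provider settings file into the chosen destination.
  await copy("**", path.join(root, destination), { cwd: providerTemplate });
}
```

Both install paths could then call this once with their respective destination, keeping the copy logic in a single place.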
packages/create-llama/helpers/providers/mistral.ts (1)

34-45: Minor: redundant prompt execution guard

`config.apiKey` is initialised from `process.env.MISTRAL_API_KEY`. Because of that, `if (!config.apiKey)` already prevents the prompt when the env var is set. The secondary check inside the prompt message ("leave blank to use … env variable") is therefore never reached.

No functional problem – just noting the redundant branch for future cleanup.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (23)
- packages/create-llama/helpers/models.ts (1 hunks)
- packages/create-llama/helpers/providers/anthropic.ts (2 hunks)
- packages/create-llama/helpers/providers/azure.ts (3 hunks)
- packages/create-llama/helpers/providers/gemini.ts (2 hunks)
- packages/create-llama/helpers/providers/groq.ts (2 hunks)
- packages/create-llama/helpers/providers/huggingface.ts (2 hunks)
- packages/create-llama/helpers/providers/index.ts (2 hunks)
- packages/create-llama/helpers/providers/llmhub.ts (3 hunks)
- packages/create-llama/helpers/providers/mistral.ts (2 hunks)
- packages/create-llama/helpers/providers/ollama.ts (2 hunks)
- packages/create-llama/helpers/providers/openai.ts (4 hunks)
- packages/create-llama/helpers/typescript.ts (3 hunks)
- packages/create-llama/questions/ci.ts (2 hunks)
- packages/create-llama/questions/questions.ts (0 hunks)
- packages/create-llama/questions/simple.ts (3 hunks)
- packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1 hunks)
- packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1 hunks)
- packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1 hunks)
- packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1 hunks)
- packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1 hunks)
- packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1 hunks)
- packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1 hunks)
- packages/create-llama/templates/components/settings/typescript/settings.ts (0 hunks)
💤 Files with no reviewable changes (2)
- packages/create-llama/questions/questions.ts
- packages/create-llama/templates/components/settings/typescript/settings.ts
🧰 Additional context used
🧬 Code Graph Analysis (16)
packages/create-llama/templates/components/providers/typescript/groq/settings.ts (6)
- packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1): initSettings (4-49)
- packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1): initSettings (4-16)
- packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1): initSettings (8-19)
- packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1): initSettings (9-16)
- packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1): initSettings (9-16)
- packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1): initSettings (4-17)

packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (6)
- packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1): initSettings (4-49)
- packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1): initSettings (5-18)
- packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1): initSettings (8-19)
- packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1): initSettings (9-16)
- packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1): initSettings (9-16)
- packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1): initSettings (4-17)

packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (6)
- packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1): initSettings (5-18)
- packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1): initSettings (4-16)
- packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1): initSettings (8-19)
- packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1): initSettings (9-16)
- packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1): initSettings (9-16)
- packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1): initSettings (4-17)

packages/create-llama/questions/ci.ts (1)
- packages/create-llama/helpers/models.ts (1): getGpt41ModelConfig (3-12)

packages/create-llama/helpers/models.ts (1)
- packages/create-llama/helpers/types.ts (1): ModelConfig (14-21)

packages/create-llama/helpers/providers/ollama.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)

packages/create-llama/helpers/providers/huggingface.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)

packages/create-llama/helpers/providers/mistral.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)

packages/create-llama/helpers/providers/azure.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)

packages/create-llama/helpers/providers/groq.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)

packages/create-llama/helpers/providers/openai.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)

packages/create-llama/helpers/providers/llmhub.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)

packages/create-llama/questions/simple.ts (3)
- packages/create-llama/helpers/models.ts (1): getGpt41ModelConfig (3-12)
- packages/create-llama/helpers/types.ts (1): ModelConfig (14-21)
- packages/create-llama/helpers/providers/index.ts (1): askModelConfig (20-81)

packages/create-llama/helpers/providers/gemini.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)

packages/create-llama/helpers/typescript.ts (2)
- packages/create-llama/helpers/types.ts (1): InstallTemplateArgs (96-116)
- packages/create-llama/helpers/copy.ts (1): copy (13-49)

packages/create-llama/helpers/providers/anthropic.ts (1)
- packages/create-llama/helpers/providers/index.ts (1): ModelConfigParams (18-18)
⏰ Context from checks skipped due to timeout of 90000ms (57)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
- GitHub Check: Unit Tests (ubuntu-latest, 3.9)
- GitHub Check: Unit Tests (windows-latest, 3.9)
- GitHub Check: lint
🔇 Additional comments (3)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1)
4-4: Consistent naming: renaming `setupProvider` to `initSettings`

This change aligns with other provider templates (e.g., Gemini, Anthropic) and standardizes the initialization entry point.

packages/create-llama/helpers/providers/llmhub.ts (1)
152-155: Embedding-dimension mapping differs from the OpenAI helper – double-check correctness

`getDimensions()` returns `768` only for `"text-embedding-004"` and `1536` for everything else, whereas the OpenAI counterpart maps `"text-embedding-3-large"` → `1024`. If LLMHub forwards requests to the same OpenAI models, this discrepancy will silently produce the wrong dimension count (e.g. 1536 instead of 1024 for `text-embedding-3-large`).

Please confirm the dimensionality for each LLMHub embedding model and align the helper functions for consistency – a shared lookup table (sketched below) would keep them in sync.
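A sketch of a single table both helpers could import. The concrete numbers below are the ones discussed in this comment (004 → 768, 3-large → 1024, 1536 otherwise) and should still be verified against the upstream model documentation before adopting them.

```ts
// Sketch only: shared dimension lookup; verify values before relying on them.
const EMBEDDING_DIMENSIONS: Record<string, number> = {
  "text-embedding-004": 768,      // value the LLMHub helper currently special-cases
  "text-embedding-3-large": 1024, // value the OpenAI helper currently maps
};

export function getDimensions(embeddingModel: string): number {
  return EMBEDDING_DIMENSIONS[embeddingModel] ?? 1536; // LLMHub's current fallback
}
```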
packages/create-llama/questions/simple.ts (1)
185-190: `modelConfig` override path is clear – good job

The fallback to GPT-4.1 and the optional interactive override are neatly separated; the code is easy to follow.
```ts
const { embeddingModel } = await prompts(
  {
    type: "select",
    name: "embeddingModel",
    message: "Which embedding model would you like to use?",
    choices: Object.keys(EMBEDDING_MODELS).map(toChoice),
    initial: 0,
  },
  questionHandlers,
);
await ensureModel(embeddingModel);
config.embeddingModel = embeddingModel;
config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;
```
Same validation gap for `embeddingModel`.

Guard against an empty or aborted prompt to avoid a `TypeError` when indexing `EMBEDDING_MODELS[embeddingModel]`.
🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/ollama.ts around lines 43 to 56, the
code does not validate if the prompt result for embeddingModel is empty or
aborted, which can cause a TypeError when accessing
EMBEDDING_MODELS[embeddingModel]. Add a check after the prompt to verify
embeddingModel is defined and valid before calling ensureModel and indexing
EMBEDDING_MODELS. If invalid, handle the error or abort gracefully to prevent
runtime exceptions.
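A sketch of that guard, building on the snippet quoted above (so `EMBEDDING_MODELS`, `toChoice`, `questionHandlers`, and `ensureModel` are the identifiers from the quoted code, not new API):

```ts
const { embeddingModel } = await prompts(
  {
    type: "select",
    name: "embeddingModel",
    message: "Which embedding model would you like to use?",
    choices: Object.keys(EMBEDDING_MODELS).map(toChoice),
    initial: 0,
  },
  questionHandlers,
);

// prompts leaves the answer undefined when the user aborts (Ctrl+C / ESC).
if (!embeddingModel || !(embeddingModel in EMBEDDING_MODELS)) {
  throw new Error("No valid embedding model selected – aborting setup.");
}

await ensureModel(embeddingModel);
config.embeddingModel = embeddingModel;
config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;
```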
```ts
const { model } = await prompts(
  {
    type: "select",
    name: "model",
    message: "Which LLM model would you like to use?",
    choices: MODELS.map(toChoice),
    initial: 0,
  },
  questionHandlers,
);
await ensureModel(model);
config.model = model;
```
Prompt result not validated – the user can press Enter (or abort) and get `undefined`.

If the user aborts or just hits return, `model` will be `undefined`, causing `ensureModel(undefined)` and a later invalid assignment. Add a fallback, e.g. `const chosen = model ?? DEFAULT_MODEL;`, or validate with the `validate` option of `prompts`.
🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/ollama.ts around lines 30 to 42, the
prompt result for selecting the model is not validated, allowing the user to
press Enter and get an undefined model. To fix this, add a fallback by assigning
the model to a default value if undefined, for example, use `const chosen =
model ?? DEFAULT_MODEL;` before calling ensureModel and assigning to
config.model, or alternatively use the `validate` option in the prompt to
prevent undefined values.
```ts
const { model } = await prompts(
  {
    type: "select",
    name: "model",
    message: "Which LLM model would you like to use?",
    choices: MODELS.map(toChoice),
    initial: 0,
  },
  questionHandlers,
);
config.model = model;
```
LLM model prompt – same missing-validation issue as Ollama.

Users can abort and yield `undefined`, propagating invalid state. Add defaulting or validation.
🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/gemini.ts around lines 48 to 59, the
prompt for selecting the LLM model can return undefined if the user aborts,
leading to invalid state. Add validation after the prompt to check if the model
is undefined, and if so, assign a default model value or handle the abort case
gracefully to ensure config.model is always valid.
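A sketch of the defaulting/validation approach, based on the snippet quoted above. `DEFAULT_MODEL` is a hypothetical constant, and the inline `onCancel` handler stands in for whatever `questionHandlers` already does in the real code:

```ts
const { model } = await prompts(
  {
    type: "select",
    name: "model",
    message: "Which LLM model would you like to use?",
    choices: MODELS.map(toChoice),
    initial: 0,
  },
  {
    // Abort with a clear error instead of silently continuing with undefined.
    onCancel: () => {
      throw new Error("Model selection was cancelled.");
    },
  },
);

config.model = model ?? DEFAULT_MODEL; // hypothetical fallback constant
```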
```ts
const { embeddingModel } = await prompts(
  {
    type: "select",
    name: "embeddingModel",
    message: "Which embedding model would you like to use?",
    choices: Object.keys(EMBEDDING_MODELS).map(toChoice),
    initial: 0,
  },
  questionHandlers,
);
config.embeddingModel = embeddingModel;
config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;
```
Embedding model prompt – guard against undefined.

Protect against abort / empty selection to prevent runtime errors when indexing `EMBEDDING_MODELS`.
🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/gemini.ts around lines 60 to 72, the
code assigns config.embeddingModel and config.dimensions based on the user's
selection without checking if embeddingModel is undefined or empty. To fix this,
add a guard clause after the prompt to verify embeddingModel is defined and
valid before accessing EMBEDDING_MODELS[embeddingModel]. If embeddingModel is
undefined or invalid, handle the case gracefully, such as by setting defaults or
exiting early to prevent runtime errors.
```ts
const { embeddingModel } = await prompts(
  {
    type: "select",
    name: "embeddingModel",
    message: "Which embedding model would you like to use?",
    choices: Object.keys(EMBEDDING_MODELS).map(toChoice),
    initial: 0,
  },
  questionHandlers,
);
config.embeddingModel = embeddingModel;
config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;
```
Guard against unknown embedding models.

If the user somehow selects or enters an embedding model that is not present in `EMBEDDING_MODELS`, `EMBEDDING_MODELS[embeddingModel]` will be `undefined` and the subsequent `.dimensions` access throws. A small defensive check avoids a hard crash.
```diff
-config.embeddingModel = embeddingModel;
-config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;
+config.embeddingModel = embeddingModel;
+
+const data = EMBEDDING_MODELS[embeddingModel];
+if (!data) {
+  throw new Error(`Unsupported embedding model: ${embeddingModel}`);
+}
+config.dimensions = data.dimensions;
```
🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/huggingface.ts around lines 46 to 57,
add a defensive check after the user selects the embeddingModel to verify that
it exists in EMBEDDING_MODELS before accessing its dimensions property. If the
selected embeddingModel is not found, handle the case gracefully by either
prompting the user again or throwing a controlled error to prevent a runtime
crash due to undefined access.