
feat: re-add ask models for simple mode #691


Open

marcusschiesser wants to merge 2 commits into main

Conversation

marcusschiesser (Collaborator) commented Jun 17, 2025

Summary by CodeRabbit

  • New Features
    • Added support for configuring the GPT-4.1 model with improved setup and selection options.
  • Refactor
    • Simplified and unified the model and embedding model selection process across all providers for a more consistent user experience.
    • Streamlined prompts to always ask for model and embedding selections, regardless of previous conditions.
    • Centralized model configuration logic for easier maintenance and improved reliability.
  • Chores
    • Updated function names for provider settings to improve clarity.
    • Removed unused parameters and types for cleaner configuration flows.


changeset-bot bot commented Jun 17, 2025

⚠️ No Changeset found

Latest commit: 9d7778d

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types



coderabbitai bot commented Jun 17, 2025

Walkthrough

This update refactors model configuration prompts and provider setup in the codebase. It removes conditional logic and parameters from provider question functions, making model and embedding model prompts unconditional. Model config creation is centralized, and provider-specific template setup functions are renamed. Type and argument changes are propagated throughout, with some template files updated or removed.

Changes

File(s) | Change Summary
helpers/models.ts | Added getGpt41ModelConfig helper for GPT-4.1 model configuration (see the sketch below).
helpers/providers/anthropic.ts, azure.ts, gemini.ts, groq.ts, huggingface.ts, llmhub.ts, mistral.ts, ollama.ts, openai.ts | Simplified provider question functions: removed parameters and conditional logic; now always prompt for models.
helpers/providers/index.ts | Refactored askModelConfig to remove askModels and openAiKey parameters; simplified provider selection.
helpers/typescript.ts | Updated installLlamaIndexServerTemplate to accept and use modelConfig.
questions/ci.ts | Switched to using getGpt41ModelConfig for CI model config; removed async and conditional logic.
questions/questions.ts | Removed openAiKey and askModels from askModelConfig call in askProQuestions.
questions/simple.ts | Centralized GPT-4.1 config, made modelConfig required, and added conditional override via prompt.
templates/components/providers/typescript/.../settings.ts | Renamed exported provider setup function from setupProvider to initSettings.
templates/components/settings/typescript/settings.ts | Deleted provider-agnostic initSettings file.
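
The exact body of getGpt41ModelConfig is not shown in this thread; a plausible shape, inferred from the review notes further down (the helper closes over an openAiKey parameter and targets GPT-4.1), might look like the following. The embedding model and dimensions are assumptions, not confirmed values:

import { ModelConfig } from "./types";

export function getGpt41ModelConfig(openAiKey?: string): ModelConfig {
  return {
    provider: "openai",
    apiKey: openAiKey,
    model: "gpt-4.1",
    embeddingModel: "text-embedding-3-large", // assumed default
    dimensions: 1024, // assumed to match the OpenAI helper's mapping
    isConfigured(): boolean {
      return !!openAiKey; // see the isConfigured note in the review below
    },
  };
}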

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI
    participant ProviderQuestion
    participant ModelConfigHelper

    User->>CLI: Start setup
    CLI->>ProviderQuestion: Prompt for API key (if needed)
    CLI->>ProviderQuestion: Prompt for LLM model
    CLI->>ProviderQuestion: Prompt for embedding model
    ProviderQuestion-->>CLI: Return config
    CLI->>ModelConfigHelper: (Optional) getGpt41ModelConfig (for CI/simple flows)
    ModelConfigHelper-->>CLI: Return model config
    CLI-->>User: Complete setup with selected config

Possibly related PRs

  • Fix Azure OpenAI issues #193: Refactors askAzureQuestions by changing API key sourcing and removing user prompts, closely related to similar refactoring in this PR.

Poem

A rabbit hopped through fields of code,
Prompting models with a lighter load.
No more "if"s or tangled threads,
Just simple choices straight ahead.
Providers now all act the same—
"initSettings" is their new name!
🐇✨


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🔭 Outside diff range comments (11)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1)

4-16: Fail fast when MODEL / EMBEDDING_MODEL env vars are missing

Both fields currently fall back to the empty string, which Ollama rejects at runtime with a vague "model not found" error.
Guard early so mis-configuration is detected immediately:

 export function initSettings() {
   const config = {
     host: process.env.OLLAMA_BASE_URL ?? "http://127.0.0.1:11434",
   };
+
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "Required env vars MODEL and/or EMBEDDING_MODEL are not set for Ollama"
+    );
+  }
+
   Settings.llm = new Ollama({
     model: process.env.MODEL!,
     config,
   });
   Settings.embedModel = new OllamaEmbedding({
     model: process.env.EMBEDDING_MODEL!,
     config,
   });
 }

This avoids silent misconfigurations and aligns with the stricter checks added for other providers.

packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1)

5-17: Guard against missing MODEL / EMBEDDING_MODEL before non-null assertions

process.env.MODEL! and embedModelMap[process.env.EMBEDDING_MODEL!] assume the vars are always present.
If they are undefined the app starts, then explodes with an obscure error from the SDK.

 export function initSettings() {
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "MODEL and EMBEDDING_MODEL must be set before initialising Groq provider"
+    );
+  }
   const embedModelMap: Record<string, string> = {
     "all-MiniLM-L6-v2": "Xenova/all-MiniLM-L6-v2",
     "all-mpnet-base-v2": "Xenova/all-mpnet-base-v2",
   };
@@
   Settings.embedModel = new HuggingFaceEmbedding({
     modelType: embedModelMap[process.env.EMBEDDING_MODEL!],
   });
 }

This keeps the failure surface small and messages clear.

packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)

8-18: Non-null assertions mask config errors

process.env.MODEL! and process.env.EMBEDDING_MODEL! are asserted non-null, yet nothing ensures they are.
Prefer explicit validation to prevent runtime surprises:

 export function initSettings() {
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "Anthropic provider requires MODEL and EMBEDDING_MODEL env vars"
+    );
+  }
   const embedModelMap: Record<string, string> = {

Also consider lifting embedModelMap to a shared util to avoid duplication across providers.
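
A minimal sketch of such a shared util, reusing the two Xenova mappings shown in the Groq template above; the file location and name are illustrative only:

// e.g. templates/components/providers/typescript/shared/embedding.ts (hypothetical path)
export const embedModelMap: Record<string, string> = {
  "all-MiniLM-L6-v2": "Xenova/all-MiniLM-L6-v2",
  "all-mpnet-base-v2": "Xenova/all-mpnet-base-v2",
};

// Providers would then import the map instead of redefining it:
// import { embedModelMap } from "../shared/embedding";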

packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)

9-16: Guard against missing MODEL / EMBEDDING_MODEL env vars.

process.env.MODEL (and the embedding counterpart) are blindly cast with as.
If the variable is undefined the SDK will throw later at runtime, yet the compiler remains silent.

-  model: process.env.MODEL as keyof typeof ALL_AVAILABLE_MISTRAL_MODELS,
+  model: assertEnv("MODEL") as keyof typeof ALL_AVAILABLE_MISTRAL_MODELS,

Consider a small helper:

function assertEnv(name: string): string {
  const v = process.env[name];
  if (!v) throw new Error(`Environment variable ${name} must be defined`);
  return v;
}
packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)

9-16: Same env-var null-safety concern as Mistral settings.

Blind casts of process.env.MODEL / EMBEDDING_MODEL may explode later.
Reuse the same assertEnv helper (or similar) to fail fast and surface configuration errors early.

packages/create-llama/questions/ci.ts (1)

18-25: async is now redundant – drop it to simplify.

getCIQuestionResults no longer awaits anything; returning a plain object wrapped in a resolved Promise is superfluous.

-export async function getCIQuestionResults(
+export function getCIQuestionResults(

and adjust the return type accordingly (QuestionResults, not Promise<QuestionResults>).
Less cognitive load and slightly faster execution.

packages/create-llama/helpers/providers/ollama.ts (1)

60-84: process.exit(1) in helper breaks consumers.

ensureModel kills the entire Node process on failure.
If this helper is ever reused in a library or inside Jest tests, it will terminate the runner unexpectedly.

Bubble the error and let the caller decide:

-      console.log(red(...));
-      process.exit(1);
+      throw new Error(red(`Model ${modelName} missing. Run 'ollama pull ${modelName}'.`));
packages/create-llama/helpers/providers/gemini.ts (1)

35-47: Prompt for API key can expose secrets in shell history.

Typing the key in an echoed prompt prints it back in clear text.
Use type: "password" so the terminal masks input.

- type: "text",
+ type: "password",
packages/create-llama/helpers/providers/groq.ts (1)

91-104: API key can still be empty after the prompt

If the user simply hits Enter when asked for the key and no GROQ_API_KEY env var is set, we move on with an empty string.
getAvailableModelChoicesGroq(config.apiKey!) then throws, but the resulting stack trace is less user-friendly than an early validation.

 if (!config.apiKey) {
   const { key } = await prompts(
@@
   );
   config.apiKey = key || process.env.GROQ_API_KEY;
+
+  if (!config.apiKey?.trim()) {
+    console.log(
+      red(
+        "A Groq API key is required to fetch model choices. Aborting.",
+      ),
+    );
+    process.exit(1);
+  }
 }
packages/create-llama/helpers/providers/azure.ts (1)

54-64: isConfigured() always returns false – is that intentional?

For Azure the comment says the provider “can’t be fully configured”, but returning false irrespective of the presence of AZURE_OPENAI_KEY suppresses downstream checks that merely need the key (e.g., early CI validation).

-isConfigured(): boolean {
-  // the Azure model provider can't be fully configured as endpoint and deployment names have to be configured with env variables
-  return false;
-},
+isConfigured(): boolean {
+  return Boolean(config.apiKey ?? process.env.AZURE_OPENAI_KEY);
+},

If additional env variables are indeed mandatory, consider checking those explicitly so users get a precise error instead of a blanket “not configured”.

packages/create-llama/helpers/providers/openai.ts (1)

31-52: config.apiKey may be undefined in CI → getAvailableModelChoices() will throw

config.apiKey is only guaranteed to be populated when
a) the environment variable is set, or
b) the interactive prompt runs.

Inside CI (isCI === true) the prompt is skipped, so a missing OPENAI_API_KEY leads to an undefined key that is subsequently passed to getAvailableModelChoices(...) (line 58/70). The helper immediately throws:

if (!apiKey) {
  throw new Error("need OpenAI key to retrieve model choices");
}

→ Any CI job without the env-var will now fail even though interactive input is impossible.

+  // In CI we must *fail early* with a clear message *before* hitting the remote call.
+  if (!config.apiKey && isCI) {
+    throw new Error(
+      "OPENAI_API_KEY is not set in the CI environment – required for model discovery",
+    );
+  }
   if (!config.apiKey && !isCI) {

Alternatively, short-circuit the model/embedding prompts when the key is absent in CI.

🧹 Nitpick comments (14)
packages/create-llama/helpers/models.ts (1)

9-11: isConfigured should rely on the object’s apiKey, not the captured param

isConfigured closes over the openAiKey argument.
If the returned config object is later mutated (config.apiKey = …), isConfigured() will still look at the stale captured value and give the wrong answer.

-  isConfigured(): boolean {
-    return !!openAiKey;
-  },
+  isConfigured(): boolean {
+    return !!this.apiKey;
+  },

This keeps the checker truthful and avoids surprising behaviour.

packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)

4-16: Handle parseInt result to avoid passing NaN as dimensions

If EMBEDDING_DIM is set but not a valid integer, parseInt returns NaN, which propagates silently to the OpenAI SDK.

   Settings.embedModel = new OpenAIEmbedding({
     model: process.env.EMBEDDING_MODEL,
-    dimensions: process.env.EMBEDDING_DIM
-      ? parseInt(process.env.EMBEDDING_DIM)
-      : undefined,
+    dimensions: (() => {
+      if (!process.env.EMBEDDING_DIM) return undefined;
+      const dim = Number.parseInt(process.env.EMBEDDING_DIM, 10);
+      if (Number.isNaN(dim)) {
+        throw new Error("EMBEDDING_DIM must be an integer");
+      }
+      return dim;
+    })(),
   });

Explicit validation prevents hard-to-trace SDK errors.

packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)

9-16: Return type is implicit – add it for clarity.

A tiny nit: initSettings has no return value; declaring (): void makes the intent explicit and avoids accidental future misuse.

packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)

9-16: Add explicit void return type for initSettings.

packages/create-llama/questions/ci.ts (1)

1-1: The import statement pulls the whole helpers file just for one function.

If getGpt41ModelConfig is the lone export, okay; if not, use a named-import path such as "../helpers/models/getGpt41ModelConfig" to keep bundle size down in ESM tree-shaking scenarios.
Not critical, but worth tracking.

packages/create-llama/helpers/providers/ollama.ts (1)

20-28: config declared with const but mutated later – prefer let or freeze.

While mutating properties of a const object is legal, it sends mixed signals.
Either:

  1. Declare with let and mutate, or
  2. Keep const and build a new object per step ({ ...config, model }).

Consistency aids maintainability.
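
For option 2, a small sketch of the per-step object style; the field names mirror the Ollama config discussed above, and the concrete values are placeholders:

type OllamaConfigDraft = {
  model?: string;
  embeddingModel?: string;
  dimensions?: number;
};

const base: OllamaConfigDraft = {};
// Each prompt step derives a new object instead of mutating a `const` binding.
const withModel: OllamaConfigDraft = { ...base, model: "llama3.1" }; // placeholder
const complete: OllamaConfigDraft = {
  ...withModel,
  embeddingModel: "nomic-embed-text", // placeholder
  dimensions: 768, // placeholder
};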

packages/create-llama/helpers/providers/gemini.ts (1)

18-33: config mutability / isConfigured closure caveat.

isConfigured closes over config; later mutations (model / embeddingModel) are fine, but apiKey may be updated after the method is read by callers, yielding stale truthiness.

Assign isConfigured after all mutations or compute lazily:

isConfigured() {
  return !!this.apiKey || !!process.env.GOOGLE_API_KEY;
}
packages/create-llama/helpers/providers/huggingface.ts (1)

34-44: Skip the prompt when there is only one available LLM model

Because MODELS currently holds a single hard-coded entry, the user is forced through an unnecessary prompt. Eliminating the prompt when MODELS.length === 1 keeps the simple-mode flow truly “simple”.

-const { model } = await prompts(
-  {
-    type: "select",
-    name: "model",
-    message: "Which Hugging Face model would you like to use?",
-    choices: MODELS.map(toChoice),
-    initial: 0,
-  },
-  questionHandlers,
-);
-config.model = model;
+if (MODELS.length === 1) {
+  config.model = MODELS[0];
+} else {
+  const { model } = await prompts(
+    {
+      type: "select",
+      name: "model",
+      message: "Which Hugging Face model would you like to use?",
+      choices: MODELS.map(toChoice),
+      initial: 0,
+    },
+    questionHandlers,
+  );
+  config.model = model;
+}
packages/create-llama/helpers/providers/groq.ts (1)

118-133: Duplicate logic across providers – consider extracting a shared embedding-model prompt

The embedding-model prompt block is identical in at least HuggingFace, Anthropic, Azure, Groq, …
A tiny helper such as promptForEmbeddingModel(EMBEDDING_MODELS) would remove ~10 repeated lines per provider and make future changes (e.g., adding a “custom” option) one-shot.

packages/create-llama/helpers/providers/index.ts (1)

50-76: Replace long switch with a provider-function map

The growing switch is starting to look unmaintainable; every new provider touches this file. A mapping keeps the logic declarative and avoids forgotten breaks.

-  let modelConfig: ModelConfigParams;
-  switch (modelProvider) {
-    case "ollama":
-      modelConfig = await askOllamaQuestions();
-      break;
-    case "groq":
-      modelConfig = await askGroqQuestions();
-      break;
-    ...
-    default:
-      modelConfig = await askOpenAIQuestions();
-  }
+  const providerToFn: Record<string, () => Promise<ModelConfigParams>> = {
+    openai: askOpenAIQuestions,
+    groq: askGroqQuestions,
+    ollama: askOllamaQuestions,
+    anthropic: askAnthropicQuestions,
+    gemini: askGeminiQuestions,
+    mistral: askMistralQuestions,
+    "azure-openai": askAzureQuestions,
+    "t-systems": askLLMHubQuestions,
+    huggingface: askHuggingfaceQuestions,
+  };
+
+  const fn = providerToFn[modelProvider] ?? askOpenAIQuestions;
+  const modelConfig = await fn();
packages/create-llama/helpers/providers/anthropic.ts (2)

51-62: Whitespace key → invalid key

prompts returns the raw input, so a key consisting only of spaces slips past the truthiness check.
isConfigured() would then wrongly regard " " as a valid API key. Trim before assignment.

-config.apiKey = key || process.env.ANTHROPIC_API_KEY;
+const trimmed = key?.trim();
+config.apiKey = trimmed ? trimmed : process.env.ANTHROPIC_API_KEY;

64-91: Shared code duplication – extract common helper

Same observation as in Groq: embedding-model prompt and dimension lookup are duplicated across providers. A helper such as

export async function promptForEmbedding<T extends Record<string, { dimensions:number }>>(
  models: T,
  message = "Which embedding model would you like to use?",
) {
  const { embeddingModel } = await prompts(
    {
      type: "select",
      name: "embeddingModel",
      message,
      choices: Object.keys(models).map(toChoice),
      initial: 0,
    },
    questionHandlers,
  );
  return { name: embeddingModel, dimensions: models[embeddingModel].dimensions };
}

would shrink each provider implementation to three lines.
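
As a usage illustration, each provider would then reduce to roughly the following (assuming the helper is exported from a shared module and the provider keeps its own EMBEDDING_MODELS map):

const { name, dimensions } = await promptForEmbedding(EMBEDDING_MODELS);
config.embeddingModel = name;
config.dimensions = dimensions;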

packages/create-llama/helpers/typescript.ts (1)

39-48: Provider settings are copied twice – consider DRYing the logic

installLlamaIndexServerTemplate() now copies
components/providers/typescript/<provider>/** into src/app (here), while installLegacyTSTemplate() performs an almost identical copy into <engine> (lines 262-266). If both flows are exercised for the same project structure this creates duplicate files and maintenance overhead.

Suggestion: extract a shared helper, or decide on a single destination (engine vs src/app) based on template type to avoid redundant copies.
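
One possible shape for a shared helper, assuming the existing copy utility takes (files, destination, { cwd }); the helper name and glob are illustrative, not the PR's actual code:

import path from "path";
import { copy } from "./copy";

// Hypothetical shared helper: both installLlamaIndexServerTemplate and
// installLegacyTSTemplate would call it with their own destination directory.
async function copyProviderSettings(
  templatesDir: string,
  provider: string,
  destination: string,
): Promise<void> {
  await copy("**", destination, {
    cwd: path.join(
      templatesDir,
      "components",
      "providers",
      "typescript",
      provider,
    ),
  });
}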

packages/create-llama/helpers/providers/mistral.ts (1)

34-45: Minor: redundant prompt execution guard

config.apiKey is initialised from process.env.MISTRAL_API_KEY.
Because of that, if (!config.apiKey) already prevents the prompt when the env-var is set. The secondary check inside the prompt message (“leave blank to use … env variable”) is therefore never reached.

No functional problem – just noting the redundant branch for future cleanup.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a221bc6 and 9d7778d.

📒 Files selected for processing (23)
  • packages/create-llama/helpers/models.ts (1 hunks)
  • packages/create-llama/helpers/providers/anthropic.ts (2 hunks)
  • packages/create-llama/helpers/providers/azure.ts (3 hunks)
  • packages/create-llama/helpers/providers/gemini.ts (2 hunks)
  • packages/create-llama/helpers/providers/groq.ts (2 hunks)
  • packages/create-llama/helpers/providers/huggingface.ts (2 hunks)
  • packages/create-llama/helpers/providers/index.ts (2 hunks)
  • packages/create-llama/helpers/providers/llmhub.ts (3 hunks)
  • packages/create-llama/helpers/providers/mistral.ts (2 hunks)
  • packages/create-llama/helpers/providers/ollama.ts (2 hunks)
  • packages/create-llama/helpers/providers/openai.ts (4 hunks)
  • packages/create-llama/helpers/typescript.ts (3 hunks)
  • packages/create-llama/questions/ci.ts (2 hunks)
  • packages/create-llama/questions/questions.ts (0 hunks)
  • packages/create-llama/questions/simple.ts (3 hunks)
  • packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1 hunks)
  • packages/create-llama/templates/components/settings/typescript/settings.ts (0 hunks)
💤 Files with no reviewable changes (2)
  • packages/create-llama/questions/questions.ts
  • packages/create-llama/templates/components/settings/typescript/settings.ts
🧰 Additional context used
🧬 Code Graph Analysis (16)
packages/create-llama/templates/components/providers/typescript/groq/settings.ts (6)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1)
  • initSettings (4-49)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1)
  • initSettings (4-16)
packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)
  • initSettings (8-19)
packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)
  • initSettings (4-17)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (6)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1)
  • initSettings (4-49)
packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1)
  • initSettings (5-18)
packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)
  • initSettings (8-19)
packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)
  • initSettings (4-17)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (6)
packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1)
  • initSettings (5-18)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1)
  • initSettings (4-16)
packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)
  • initSettings (8-19)
packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)
  • initSettings (4-17)
packages/create-llama/questions/ci.ts (1)
packages/create-llama/helpers/models.ts (1)
  • getGpt41ModelConfig (3-12)
packages/create-llama/helpers/models.ts (1)
packages/create-llama/helpers/types.ts (1)
  • ModelConfig (14-21)
packages/create-llama/helpers/providers/ollama.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/huggingface.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/mistral.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/azure.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/groq.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/openai.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/llmhub.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/questions/simple.ts (3)
packages/create-llama/helpers/models.ts (1)
  • getGpt41ModelConfig (3-12)
packages/create-llama/helpers/types.ts (1)
  • ModelConfig (14-21)
packages/create-llama/helpers/providers/index.ts (1)
  • askModelConfig (20-81)
packages/create-llama/helpers/providers/gemini.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/typescript.ts (2)
packages/create-llama/helpers/types.ts (1)
  • InstallTemplateArgs (96-116)
packages/create-llama/helpers/copy.ts (1)
  • copy (13-49)
packages/create-llama/helpers/providers/anthropic.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
⏰ Context from checks skipped due to timeout of 90000ms (57)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: Unit Tests (ubuntu-latest, 3.9)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: lint
🔇 Additional comments (3)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1)

4-4: Consistent naming: renaming setupProvider to initSettings
This change aligns with other provider templates (e.g., Gemini, Anthropic) and standardizes the initialization entry point.

packages/create-llama/helpers/providers/llmhub.ts (1)

152-155: Embedding-dimension mapping differs from OpenAI helper – double-check correctness

getDimensions() returns 768 only for "text-embedding-004" and 1536 for everything else, whereas the OpenAI counterpart maps "text-embedding-3-large" to 1024. If LLMHub forwards requests to the same OpenAI models, this discrepancy will silently produce the wrong dimension count (e.g. 1536 instead of 1024 for text-embedding-3-large).

Please confirm the dimensionality for each LLMHub embedding model and align the helper functions for consistency.
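
If the LLMHub models do mirror the OpenAI ones, a single shared lookup would keep both helpers in sync. A sketch using the dimensions quoted above; the actual LLMHub values still need to be confirmed before adopting it:

// Hypothetical shared mapping; verify each value against the LLMHub docs.
const EMBEDDING_DIMENSIONS: Record<string, number> = {
  "text-embedding-004": 768,
  "text-embedding-3-large": 1024,
};

export function getDimensions(modelName: string): number {
  return EMBEDDING_DIMENSIONS[modelName] ?? 1536; // current fallback
}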

packages/create-llama/questions/simple.ts (1)

185-190: modelConfig override path is clear – good job

The fallback to GPT-4.1 and the optional interactive override are neatly separated; the code is easy to follow.

Comment on lines +43 to 56
const { embeddingModel } = await prompts(
{
type: "select",
name: "embeddingModel",
message: "Which embedding model would you like to use?",
choices: Object.keys(EMBEDDING_MODELS).map(toChoice),
initial: 0,
},
questionHandlers,
);
await ensureModel(embeddingModel);
config.embeddingModel = embeddingModel;
config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;


⚠️ Potential issue

Same validation gap for embeddingModel.

Guard against empty or aborted prompt to avoid TypeError when indexing EMBEDDING_MODELS[embeddingModel].

🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/ollama.ts around lines 43 to 56, the
code does not validate if the prompt result for embeddingModel is empty or
aborted, which can cause a TypeError when accessing
EMBEDDING_MODELS[embeddingModel]. Add a check after the prompt to verify
embeddingModel is defined and valid before calling ensureModel and indexing
EMBEDDING_MODELS. If invalid, handle the error or abort gracefully to prevent
runtime exceptions.
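
A sketch of such a guard, assuming prompts yields undefined when the selection is aborted and reusing the CLI's existing red/process.exit style; not the PR's actual fix:

const { embeddingModel } = await prompts(
  {
    type: "select",
    name: "embeddingModel",
    message: "Which embedding model would you like to use?",
    choices: Object.keys(EMBEDDING_MODELS).map(toChoice),
    initial: 0,
  },
  questionHandlers,
);

// Guard against an aborted or empty selection before indexing the map.
if (!embeddingModel || !(embeddingModel in EMBEDDING_MODELS)) {
  console.log(red("No valid embedding model selected. Aborting."));
  process.exit(1);
}

await ensureModel(embeddingModel);
config.embeddingModel = embeddingModel;
config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;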

Comment on lines +30 to 42
const { model } = await prompts(
{
type: "select",
name: "model",
message: "Which LLM model would you like to use?",
choices: MODELS.map(toChoice),
initial: 0,
},
questionHandlers,
);
await ensureModel(model);
config.model = model;


⚠️ Potential issue

Prompt result not validated – the user can abort or just press Enter and get undefined.

If the user aborts or just hits return, model will be undefined, causing ensureModel(undefined) and later assignment.
Add a fallback:

const chosen = model ?? DEFAULT_MODEL;

or validate with the validate option of prompts.

🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/ollama.ts around lines 30 to 42, the
prompt result for selecting the model is not validated, allowing the user to
press Enter and get an undefined model. To fix this, add a fallback by assigning
the model to a default value if undefined, for example, use `const chosen =
model ?? DEFAULT_MODEL;` before calling ensureModel and assigning to
config.model, or alternatively use the `validate` option in the prompt to
prevent undefined values.

Comment on lines +48 to 59
const { model } = await prompts(
{
type: "select",
name: "model",
message: "Which LLM model would you like to use?",
choices: MODELS.map(toChoice),
initial: 0,
},
questionHandlers,
);
config.model = model;


⚠️ Potential issue

LLM model prompt – same missing-validation issue as Ollama.

Users can abort and yield undefined, propagating invalid state. Add defaulting or validation.

🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/gemini.ts around lines 48 to 59, the
prompt for selecting the LLM model can return undefined if the user aborts,
leading to invalid state. Add validation after the prompt to check if the model
is undefined, and if so, assign a default model value or handle the abort case
gracefully to ensure config.model is always valid.

Comment on lines +60 to 72
const { embeddingModel } = await prompts(
{
type: "select",
name: "embeddingModel",
message: "Which embedding model would you like to use?",
choices: Object.keys(EMBEDDING_MODELS).map(toChoice),
initial: 0,
},
questionHandlers,
);
config.embeddingModel = embeddingModel;
config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;


⚠️ Potential issue

Embedding model prompt – guard against undefined.

Protect against abort / empty selection to prevent runtime errors when indexing EMBEDDING_MODELS.

🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/gemini.ts around lines 60 to 72, the
code assigns config.embeddingModel and config.dimensions based on the user's
selection without checking if embeddingModel is undefined or empty. To fix this,
add a guard clause after the prompt to verify embeddingModel is defined and
valid before accessing EMBEDDING_MODELS[embeddingModel]. If embeddingModel is
undefined or invalid, handle the case gracefully, such as by setting defaults or
exiting early to prevent runtime errors.

Comment on lines +46 to +57
const { embeddingModel } = await prompts(
{
type: "select",
name: "embeddingModel",
message: "Which embedding model would you like to use?",
choices: Object.keys(EMBEDDING_MODELS).map(toChoice),
initial: 0,
},
questionHandlers,
);
config.embeddingModel = embeddingModel;
config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;

⚠️ Potential issue

Guard against unknown embedding models

If the user somehow selects / enters an embedding model that is not present in EMBEDDING_MODELS, EMBEDDING_MODELS[embeddingModel] will be undefined and the subsequent .dimensions access throws. A small defensive check avoids a hard crash.

-config.embeddingModel = embeddingModel;
-config.dimensions = EMBEDDING_MODELS[embeddingModel].dimensions;
+config.embeddingModel = embeddingModel;
+
+const data = EMBEDDING_MODELS[embeddingModel];
+if (!data) {
+  throw new Error(`Unsupported embedding model: ${embeddingModel}`);
+}
+config.dimensions = data.dimensions;
🤖 Prompt for AI Agents
In packages/create-llama/helpers/providers/huggingface.ts around lines 46 to 57,
add a defensive check after the user selects the embeddingModel to verify that
it exists in EMBEDDING_MODELS before accessing its dimensions property. If the
selected embeddingModel is not found, handle the case gracefully by either
prompting the user again or throwing a controlled error to prevent a runtime
crash due to undefined access.

Labels: None yet
Projects: None yet
1 participant