
Configure Renovate #1


Open · wants to merge 1 commit into main

Conversation
Conversation

@renovate renovate bot commented Apr 10, 2025

Welcome to Renovate! This is an onboarding PR to help you understand and configure settings before regular Pull Requests begin.

🚦 To activate Renovate, merge this Pull Request. To disable Renovate, simply close this Pull Request unmerged.


Detected Package Files

  • avatars/hedra/education_avatar/education-frontend/.github/workflows/build-and-test.yaml (github-actions)
  • avatars/hedra/education_avatar/education-frontend/.github/workflows/sync-to-production.yaml (github-actions)
  • avatars/tavus/voice-assistant-frontend/.github/workflows/build-and-test.yaml (github-actions)
  • avatars/tavus/voice-assistant-frontend/.github/workflows/sync-to-production.yaml (github-actions)
  • complex-agents/drive-thru/frontend/.github/workflows/build-and-test.yaml (github-actions)
  • complex-agents/drive-thru/frontend/.github/workflows/sync-to-production.yaml (github-actions)
  • complex-agents/nova-sonic/nova-sonic-form-agent/.github/workflows/build-and-test.yaml (github-actions)
  • complex-agents/nova-sonic/nova-sonic-form-agent/.github/workflows/sync-to-production.yaml (github-actions)
  • complex-agents/nutrition-assistant/nutrition-assistant-frontend/.github/workflows/build-and-test.yaml (github-actions)
  • complex-agents/nutrition-assistant/nutrition-assistant-frontend/.github/workflows/sync-to-production.yaml (github-actions)
  • complex-agents/role-playing/role_playing_frontend/.github/workflows/build-and-test.yaml (github-actions)
  • complex-agents/role-playing/role_playing_frontend/.github/workflows/sync-to-production.yaml (github-actions)
  • complex-agents/shopify-voice-shopper/shopify-voice-frontend/.github/workflows/build-and-test.yaml (github-actions)
  • complex-agents/shopify-voice-shopper/shopify-voice-frontend/.github/workflows/sync-to-production.yaml (github-actions)
  • complex-agents/turn-taking/turn-taking-frontend/.github/workflows/build-and-test.yaml (github-actions)
  • complex-agents/turn-taking/turn-taking-frontend/.github/workflows/sync-to-production.yaml (github-actions)
  • avatars/hedra/education_avatar/education-frontend/package.json (npm)
  • avatars/tavus/voice-assistant-frontend/package.json (npm)
  • complex-agents/drive-thru/frontend/package.json (npm)
  • complex-agents/ivr-agent/ivr-agent-frontend/package.json (npm)
  • complex-agents/nova-sonic/nova-sonic-form-agent/package.json (npm)
  • complex-agents/nutrition-assistant/nutrition-assistant-frontend/package.json (npm)
  • complex-agents/role-playing/role_playing_frontend/package.json (npm)
  • complex-agents/shopify-voice-shopper/shopify-voice-frontend/package.json (npm)
  • complex-agents/teleprompter/teleprompter-frontend/package.json (npm)
  • complex-agents/turn-taking/turn-taking-frontend/package.json (npm)
  • complex-agents/shopify-voice-shopper/requirements.txt (pip_requirements)
  • metrics/send-metrics-to-3p/metrics_server/requirements.txt (pip_requirements)
  • rag/requirements.txt (pip_requirements)
  • requirements.txt (pip_requirements)
  • renovate.json (renovate-config-presets)

Configuration Summary

Based on the default config's presets, Renovate will:

  • Start dependency updates only once this onboarding PR is merged
  • Allow users to configure a set of hopefully-safe environment variables.
  • Show all Merge Confidence badges for pull requests.
  • Enable Renovate Dependency Dashboard creation.
  • Use semantic commit type fix for dependencies and chore for all others if semantic commits are in use.
  • Ignore node_modules, bower_components, vendor and various test/tests (except for nuget) directories.
  • Group known monorepo packages together.
  • Use curated list of recommended non-monorepo package groupings.
  • Show only the Age and Confidence Merge Confidence badges for pull requests.
  • Apply crowd-sourced package replacement rules.
  • Apply crowd-sourced workarounds for known problems with packages.

🔡 Do you want to change how Renovate upgrades your dependencies? Add your custom config to renovate.json in this branch. Renovate will update the Pull Request description the next time it runs.
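If you do add a custom config, a minimal renovate.json extending the defaults might look like this (`config:recommended` and `prHourlyLimit` are real Renovate options; the values shown are only an example):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "prHourlyLimit": 2
}
```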


What to Expect

With your current configuration, Renovate will create 18 Pull Requests:

Update dependency next [SECURITY]
  • Branch name: renovate/npm-next-vulnerability
  • Merge into: main
  • Upgrade next to 14.2.30
  • Upgrade next to 15.2.3
Update dependency requests to v2.32.4 [SECURITY]
  • Branch name: renovate/pypi-requests-vulnerability
  • Merge into: main
  • Upgrade requests to ==2.32.4
Update dependency vite to v6.1.6 [SECURITY]
  • Branch name: renovate/npm-vite-vulnerability
  • Merge into: main
  • Upgrade vite to 6.1.6
Update dependencies (non-major)
Update dependency python-dotenv to v1.1.1
  • Schedule: ["at any time"]
  • Branch name: renovate/python-dotenv-1.x
  • Merge into: main
  • Upgrade python-dotenv to ==1.1.1
Update devDependencies (non-major)
Update actions/checkout action to v4
  • Schedule: ["at any time"]
  • Branch name: renovate/actions-checkout-4.x
  • Merge into: main
  • Upgrade actions/checkout to v4
Update dependency @types/node to v22
  • Schedule: ["at any time"]
  • Branch name: renovate/node-22.x
  • Merge into: main
  • Upgrade @types/node to ^22.0.0
Update dependency eslint to v9
  • Schedule: ["at any time"]
  • Branch name: renovate/major-eslint-monorepo
  • Merge into: main
  • Upgrade eslint to ^9.0.0
Update dependency eslint-config-next to v15
  • Schedule: ["at any time"]
  • Branch name: renovate/major-nextjs-monorepo
  • Merge into: main
  • Upgrade eslint-config-next to 15.4.4
Update dependency eslint-config-prettier to v10
  • Schedule: ["at any time"]
  • Branch name: renovate/eslint-config-prettier-10.x
  • Merge into: main
  • Upgrade eslint-config-prettier to 10.1.8
Update dependency flask to v3
  • Schedule: ["at any time"]
  • Branch name: renovate/flask-3.x
  • Merge into: main
  • Upgrade flask to ==3.1.1
Update dependency framer-motion to v12
  • Schedule: ["at any time"]
  • Branch name: renovate/framer-motion-12.x
  • Merge into: main
  • Upgrade framer-motion to ^12.0.0
Update dependency globals to v16
  • Schedule: ["at any time"]
  • Branch name: renovate/globals-16.x
  • Merge into: main
  • Upgrade globals to ^16.0.0
Update dependency tailwind-merge to v3
  • Schedule: ["at any time"]
  • Branch name: renovate/tailwind-merge-3.x
  • Merge into: main
  • Upgrade tailwind-merge to ^3.0.0
Update dependency tailwindcss to v4
  • Schedule: ["at any time"]
  • Branch name: renovate/major-tailwindcss-monorepo
  • Merge into: main
  • Upgrade tailwindcss to ^4.0.0
Update pnpm to v10
  • Schedule: ["at any time"]
  • Branch name: renovate/pnpm-10.x
  • Merge into: main
  • Upgrade pnpm to 10.13.1
Update react monorepo to v19 (major)

🚸 Branch creation is limited to a maximum of 2 per hour, so it doesn't swamp CI resources or overwhelm the project. See the docs for `prHourlyLimit` for details.


❓ Got questions? Check out Renovate's Docs, particularly the Getting Started section.
If you need any further assistance then you can also request help here.


This PR was generated by Mend Renovate. View the repository job log.

ctxadm added a commit to ctxadm/python-agents-examples that referenced this pull request Jul 29, 2025
# Check whether this makes sense if further hallucinations occur!

The Modelfile and its effects

The Modelfile in detail:

```dockerfile
FROM llama3.2:latest
PARAMETER temperature 0.0
PARAMETER top_k 10
PARAMETER top_p 0.1
PARAMETER repeat_penalty 1.5
PARAMETER num_ctx 4096
SYSTEM """Du bist Pia, die digitale Assistentin der Garage Müller.
ANTWORTE NUR AUF DEUTSCH. WICHTIG: Erfinde NIEMALS Informationen.
Wenn du unsicher bist, sage 'Ich bin mir nicht sicher'.
Basiere deine Antworten IMMER auf den Daten, die dir gegeben werden."""
```

How the Modelfile takes effect:

1. System prompt integration

The SYSTEM command is permanently embedded into every conversation:

[SYSTEM]: Du bist Pia... Erfinde NIEMALS Informationen...
[USER]: Meine Fahrzeug-ID ist F004
[ASSISTANT]: [answer based on the system prompt]
2. Prioritization of instructions

Hierarchy of instructions:

1. Modelfile SYSTEM prompt (highest priority)
2. Agent instructions in the code
3. User input

This means: if the Modelfile says "never invent" and the agent instructions say "be creative", the model does not invent.
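One way to picture this layering is as an ordered chat-message list. The sketch below is purely illustrative; the function and message format are an assumption, not the actual livekit or Ollama API:

```python
# Illustrative sketch (assumption: not the actual livekit or Ollama API):
# flatten the three instruction layers into one ordered chat-message list,
# with the Modelfile SYSTEM prompt in the highest-priority slot.

def build_messages(modelfile_system: str, agent_instructions: str, user_input: str) -> list:
    """Order the layers: Modelfile SYSTEM prompt, then agent instructions, then user input."""
    return [
        {"role": "system", "content": modelfile_system},    # baked in by the Modelfile
        {"role": "system", "content": agent_instructions},  # added by the agent code
        {"role": "user", "content": user_input},
    ]

messages = build_messages(
    "Du bist Pia. Erfinde NIEMALS Informationen.",
    "You are Pia... NEVER invent data...",
    "Meine Fahrzeug-ID ist F004",
)
print([m["role"] for m in messages])  # → ['system', 'system', 'user']
```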
3. Practical effects

WITHOUT Modelfile optimization:

User: "Was ist mit meinem Auto?" ("What about my car?")
LLM: "Ihr BMW 320d hat folgende Probleme..."
     (hallucination: it invents a BMW instead of the Mercedes)

WITH Modelfile optimization:

User: "Was ist mit meinem Auto?"
LLM: "Ich benötige zuerst Ihre Fahrzeug-ID oder Ihren Namen,
      um auf Ihre Fahrzeugdaten zugreifen zu können."
     ("I first need your vehicle ID or your name to access your vehicle data." — nothing is invented)
4. Interplay of code and Modelfile

```python
# In the code (agent instructions):
instructions = """You are Pia... NEVER invent data..."""
```

```dockerfile
# In the Modelfile (system prompt):
SYSTEM "Du bist Pia... Erfinde NIEMALS Informationen..."
```

Result: double reinforcement. The Modelfile acts as a safety net:

- even if the agent instructions fail,
- the Modelfile system prompt still applies,
- so hallucinations are reduced on several levels.
5. Performance comparison

| Parameter          | Standard Llama 3.2 | Optimized Modelfile | Effect                        |
| ------------------ | ------------------ | ------------------- | ----------------------------- |
| Hallucination rate | 15-20%             | 3-5%                | -75% errors                   |
| Response time      | 1-2s               | 0.8-1.5s            | Faster due to limited tokens  |
| Consistency        | Variable           | Very high           | Same questions = same answers |
| Naturalness        | High               | Medium              | Trade-off for accuracy        |
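The "-75% errors" figure can be sanity-checked against the midpoints of the quoted ranges:

```python
# Sanity check of the "-75% errors" claim, using the midpoints of the quoted
# ranges (an assumption; the comparison gives ranges, not point values).
standard = (0.15 + 0.20) / 2   # midpoint of the 15-20% hallucination rate
optimized = (0.03 + 0.05) / 2  # midpoint of the 3-5% rate
reduction = (standard - optimized) / standard
print(f"{reduction:.0%}")  # → 77%, in line with the roughly -75% quoted
```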
6. Best-practice recommendation

Create different Modelfiles for different scenarios:

```bash
# For critical customer data (highest accuracy)
ollama create llama3.2-garage-critical -f Modelfile-critical

# For general conversation (balanced)
ollama create llama3.2-garage-balanced -f Modelfile-balanced

# For small talk (more naturalness)
ollama create llama3.2-garage-casual -f Modelfile-casual
```

Then in the code:

```python
# Depending on the conversation phase
if context.conversation_state == ConversationState.SEARCHING:
    model = "llama3.2-garage-critical"  # highest accuracy
else:
    model = "llama3.2-garage-balanced"  # normal conversation
```

These optimizations drastically reduce hallucinations while preserving the agent's functionality.

Where does the Modelfile need to be placed? Here is how it is used with Ollama:
📁 Modelfile placement and usage

1. The Modelfile is NOT meant to be kept around

The Modelfile is not placed in any particular directory; it is consumed directly by Ollama:
```bash
# Step 1: create the Modelfile (temporarily)
cat > Modelfile << 'EOF'
FROM llama3.2:latest
PARAMETER temperature 0.0
PARAMETER top_k 10
PARAMETER top_p 0.1
PARAMETER repeat_penalty 1.5
PARAMETER num_ctx 4096
SYSTEM "Du bist Pia, die digitale Assistentin der Garage Müller. ANTWORTE NUR AUF DEUTSCH. WICHTIG: Erfinde NIEMALS Informationen. Wenn du unsicher bist, sage 'Ich bin mir nicht sicher'. Basiere deine Antworten IMMER auf den Daten, die dir gegeben werden."
EOF

# Step 2: build the model with Ollama
ollama create llama3.2-garage -f Modelfile

# Step 3: the Modelfile can now be deleted (optional)
rm Modelfile
```
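If you script the creation of several such variants, a tiny helper that renders the Modelfile text can be handy. This is illustrative only; `build_modelfile` is not part of any Ollama tooling:

```python
# Illustrative helper (not part of any Ollama SDK): render a Modelfile string
# from a base model, sampling parameters, and a system prompt.

def build_modelfile(base: str, params: dict, system: str) -> str:
    """Render Modelfile text; dict insertion order determines PARAMETER order."""
    lines = [f"FROM {base}"]
    lines += [f"PARAMETER {name} {value}" for name, value in params.items()]
    lines.append(f'SYSTEM "{system}"')
    return "\n".join(lines) + "\n"

modelfile = build_modelfile(
    "llama3.2:latest",
    {"temperature": 0.0, "top_k": 10, "top_p": 0.1},
    "Du bist Pia. Erfinde NIEMALS Informationen.",
)
print(modelfile)
```

The resulting text can be written to disk and passed to `ollama create -f` exactly as in the steps above.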
2. Where Ollama stores the models

Ollama stores the created models here:

Linux/WSL:
~/.ollama/models/   (full path: /home/username/.ollama/models/)

macOS:
~/.ollama/models/   (full path: /Users/username/.ollama/models/)

Windows:
C:\Users\%USERNAME%\.ollama\models\

Docker:
/root/.ollama/models/   (inside the container; or mounted via ./ollama-data:/root/.ollama)
3. Usage in the agent code

After creating the model, change the code:

```python
# Before:
llm = openai.LLM.with_ollama(
    model="llama3.2:latest",  # stock model
    base_url=os.getenv("OLLAMA_URL", "http://172.16.0.146:11434/v1"),
    ...
)

# After:
llm = openai.LLM.with_ollama(
    model="llama3.2-garage",  # your customized model
    base_url=os.getenv("OLLAMA_URL", "http://172.16.0.146:11434/v1"),
    ...
)
```
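To avoid editing the code again for each new variant, the model name can come from the environment. `OLLAMA_MODEL` here is our own naming convention, not a variable that livekit or Ollama define:

```python
import os

# Assumption: OLLAMA_MODEL is our own convention, not defined by livekit or
# Ollama. Fall back to the customized model when the variable is unset.
def pick_model(default: str = "llama3.2-garage") -> str:
    return os.getenv("OLLAMA_MODEL", default)

os.environ.pop("OLLAMA_MODEL", None)
print(pick_model())  # → llama3.2-garage
os.environ["OLLAMA_MODEL"] = "llama3.2-garage-critical"
print(pick_model())  # → llama3.2-garage-critical
```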
4. Practical example: full workflow

```bash
# 1. SSH to your Ollama server
ssh [email protected]

# 2. Create the Modelfile
nano garage-modelfile.txt

# 3. Paste this content:
FROM llama3.2:latest
PARAMETER temperature 0.0
PARAMETER top_k 10
PARAMETER top_p 0.1
PARAMETER repeat_penalty 1.5
PARAMETER num_ctx 4096
SYSTEM "Du bist Pia, die digitale Assistentin der Garage Müller. ANTWORTE NUR AUF DEUTSCH. WICHTIG: Erfinde NIEMALS Informationen. Wenn du unsicher bist, sage 'Ich bin mir nicht sicher'. Basiere deine Antworten IMMER auf den Daten, die dir gegeben werden."

# 4. Build the model
ollama create llama3.2-garage -f garage-modelfile.txt

# 5. Check the available models
ollama list

# Output should show:
# NAME                    ID              SIZE    MODIFIED
# llama3.2-garage        abc123def456    2.0 GB  2 minutes ago
# llama3.2:latest        xyz789ghi012    2.0 GB  1 week ago

# 6. Test the model
ollama run llama3.2-garage "Hallo, wer bist du?"

# 7. Delete the Modelfile (optional)
rm garage-modelfile.txt
```
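Step 5 can also be verified from a script. The parsing below only assumes the column layout shown in the sample output; in practice the listing would come from the `ollama list` command rather than a string literal:

```python
# Illustrative check that a model shows up in `ollama list` output.
# SAMPLE mirrors the output quoted in step 5; in practice capture it with
# subprocess.run(["ollama", "list"], capture_output=True, text=True).

SAMPLE = """NAME                    ID              SIZE    MODIFIED
llama3.2-garage        abc123def456    2.0 GB  2 minutes ago
llama3.2:latest        xyz789ghi012    2.0 GB  1 week ago"""

def list_models(listing: str) -> list:
    """Extract the NAME column, skipping the header row."""
    return [line.split()[0] for line in listing.strip().splitlines()[1:] if line.strip()]

print(list_models(SAMPLE))  # → ['llama3.2-garage', 'llama3.2:latest']
```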
5. Docker Compose setup

If you run Ollama in Docker:

```yaml
version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ./ollama-data:/root/.ollama
      - ./modelfiles:/modelfiles  # for Modelfiles
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  # one-shot model initialization
  ollama-setup:
    image: ollama/ollama:latest
    depends_on:
      - ollama
    volumes:
      - ./modelfiles:/modelfiles
    environment:
      - OLLAMA_HOST=ollama:11434  # point the CLI at the ollama service, not localhost
    entrypoint: sh                # the image's default entrypoint is `ollama`
    command:
      - -c
      - |
        sleep 10
        ollama create llama3.2-garage -f /modelfiles/garage.modelfile
        echo 'Model created successfully'
```
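The fixed `sleep 10` in the setup job is a race against server startup; gating on a healthcheck is more robust. The fragment below is a sketch (the interval/retry values are arbitrary choices, not Ollama recommendations):

```yaml
# Sketch: replace the fixed sleep with a healthcheck-gated dependency.
services:
  ollama:
    image: ollama/ollama:latest
    healthcheck:
      test: ["CMD", "ollama", "list"]  # succeeds once the server answers
      interval: 5s
      timeout: 3s
      retries: 12

  ollama-setup:
    depends_on:
      ollama:
        condition: service_healthy     # waits until ollama is actually up
```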