Feature/ai service/refine evalaution process: add streamlit app for eval analysis #36

Workflow file for this run

# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: AI Service CI

on:
  push:
    branches: [ main ]
  pull_request:
    types: [ labeled ]

permissions:
  contents: read

concurrency:
  # avoid mis-canceling CI runs when other labels are added to the PR, so the label name is part of the condition
  group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.event.label.name == 'ci/ai-service' && github.event.number || github.sha }}
  cancel-in-progress: true

defaults:
  run:
    working-directory: wren-ai-service

jobs:
  ci:
    if: ${{ github.event.label.name == 'ci/ai-service' || github.event_name == 'push' && github.ref == 'refs/heads/main' }}
    strategy:
      fail-fast: false
      matrix:
        python-version: [ "3.12" ]
        poetry-version: [ "1.7.1" ]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.12
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Run image
        uses: abatilo/actions-poetry@v2
        with:
          poetry-version: ${{ matrix.poetry-version }}
      - name: Install the project dependencies
        run: poetry install
      - name: Run Qdrant
        run: docker run -p 6333:6333 -p 6334:6334 -d --name qdrant qdrant/qdrant:v1.7.4
      - name: Test with pytest
        run: poetry run pytest
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ENV: dev
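
For reference, the same checks can be reproduced locally from the wren-ai-service directory. This is a minimal sketch assuming Docker and Poetry 1.7.1 are installed and an OpenAI API key is available; the commands mirror the run steps above, and <your-key> is a placeholder.

# start Qdrant locally (same image and ports as the CI step)
docker run -p 6333:6333 -p 6334:6334 -d --name qdrant qdrant/qdrant:v1.7.4
# install dependencies and run the test suite with the env vars the workflow sets
poetry install
OPENAI_API_KEY=<your-key> ENV=dev poetry run pytest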