
Commit e90849c

Merge branch 'integration-test-fix' of github.com:canonical/test_observer into integration-test-fix

mz2 committed Jul 2, 2023
2 parents a3a5404 + 8417af7 commit e90849c
Showing 54 changed files with 1,668 additions and 731 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/test_backend.yml
@@ -31,9 +31,9 @@ jobs:
with:
poetry-version: "1.5.1"
- run: poetry install
- run: poetry run black --check test_observer tests migrations/versions
- run: poetry run ruff test_observer tests migrations/versions
- run: poetry run mypy test_observer tests migrations/versions
- run: poetry run black --check test_observer tests migrations scripts
- run: poetry run ruff test_observer tests migrations scripts
- run: poetry run mypy --explicit-package-bases test_observer tests migrations scripts
- run: poetry run pytest
env:
TEST_DB_URL: postgresql+pg8000://postgres:password@localhost:5432/postgres
22 changes: 22 additions & 0 deletions .github/workflows/test_frontend.yml
@@ -0,0 +1,22 @@
name: Test Frontend
on: [push]
# Cancel in-progress runs if a new commit is pushed
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
defaults:
run:
working-directory: frontend
steps:
- uses: actions/checkout@v3
- uses: subosito/flutter-action@v2
with:
channel: 'stable'
- run: flutter pub get
- run: flutter pub run build_runner build
- run: flutter analyze
- run: flutter test
- run: flutter build web
18 changes: 12 additions & 6 deletions README.md
@@ -6,13 +6,19 @@ Observe the status and state of certification tests for various artefacts

- `juju` 3.1 or later (`sudo snap install juju --channel=3.1/stable`)
- `microk8s` 1.27 or later (`sudo snap install microk8s --channel=1.27-strict/stable`) + [permission setup steps after install](https://juju.is/docs/sdk/set-up-your-development-environment#heading--install-microk8s)
- `terraform` 1.4.6 or later (`sudo snap install terraform`)
- `lxd` 5.13 or later (`sudo snap install lxc --channel=5.0/stable`) + `lxd init` after install.
- `charmcraft` 2.3.0 or later (`sudo snap install charmcraft --channel=2.x/stable`)
- `terraform` 1.4.6 or later (`sudo snap install terraform --classic`)
- `lxd` 5.13 or later (`sudo snap install lxd --channel=5.13/stable` or `sudo snap refresh lxd --channel=5.13/stable` if already installed) + `lxd init --auto` after install.
- `charmcraft` 2.3.0 or later (`sudo snap install charmcraft --channel=2.x/stable --classic`)
- optional: `jhack` for all kinds of handy Juju and charm SDK development and debugging operations (`sudo snap install jhack`)

## Deploying a copy of the system with terraform / juju in microk8s

Workaround for juju bug https://bugs.launchpad.net/juju/+bug/1988355

```
mkdir -p ~/.local/share
```

First configure microk8s with the needed extensions:

@@ -22,7 +28,7 @@

```
sudo microk8s enable dns hostpath-storage metallb traefik # metallb setup involv
```

Then help microk8s work with an authorized (private) OCI image registry at ghcr.io:

1. Get a GitHub personal access token at https://github.com/settings/tokens/new with the `package:read` permission.
1. Get a GitHub personal access token at https://github.com/settings/tokens/new with the `read:packages` permission.
2. Configure containerd in microk8s with the auth credentials needed to pull images from non-default, authorisation-requiring OCI registries by appending the following to `/var/snap/microk8s/current/args/containerd-template.toml`:

```yaml
# (containerd auth config collapsed in diff view)
```

@@ -77,7 +83,7 @@ You can optionally get SSL certificates automatically managed for the ingress (i
TF_VAR_environment=development TF_VAR_external_ingress_hostname="mah-domain.com" TF_VAR_cloudflare_acme=true TF_VAR_cloudflare_dns_api_token=... TF_VAR_cloudflare_zone_read_api_token=... TF_VAR_cloudflare_email=... terraform apply -auto-approve
```

After all is up, `juju status --relations` should give you output to the direction of the following (the acme-operator only there if `TF_VAR_cloudflare_acme` was passed in):
After all is up, you can run `juju switch test-observer-development` to use the development juju model. Then `juju status --relations` should give you output along the lines of the following (the acme-operator is only there if `TF_VAR_cloudflare_acme` was passed in):

```bash
$ juju status --relations
```

@@ -130,7 +136,7 @@ charmcraft pack
```bash
juju refresh test-observer-api --path ./test-observer-api_ubuntu-22.04-amd64.charm

# to update the OCI image that runs the backend
juju attach-resource test-observer-api --resource api-image=ghcr.io/canonical/test_observer/backend:[tag or sha]
juju attach-resource test-observer-api api-image=ghcr.io/canonical/test_observer/backend:[tag or sha]
```

### Build and refresh the frontend charm
2 changes: 2 additions & 0 deletions backend/.dockerignore
@@ -39,6 +39,8 @@ var/
.installed.cfg
*.egg
.mypy_cache
.pytest_cache
.ruff_cache

# PyInstaller
# Usually these files are written by a python script from a template
6 changes: 5 additions & 1 deletion backend/README.md
@@ -29,6 +29,10 @@ Linting is done using ruff, formatting using black and type checking using mypy.

Assuming that your microk8s cluster is running, you can start the development environment by simply running `$ skaffold dev`. This command will build the docker images and push them to your microk8s registry, then apply your k8s manifest to start the cluster and pull those images. Additionally, skaffold will watch for file changes and either sync them directly inside the running containers or rebuild and redeploy the k8s cluster for you automatically.

### 5. [Optional] seed the database

Run the `scripts/seed_data.py` script to seed the database with some dummy data. This can be useful when working on the front-end.

## Dependency Management

### Add/Install dependency
@@ -55,7 +59,7 @@ For more information on how to create migrations, please check [Alembic docs](ht

Note, however, that the created migration will reside inside the pod, not on your host machine, so you have to copy it over:

`$ kubectl cp test-observer-api-RESTOFPODNAME:/home/app/alembic/versions ./alembic/versions`
`$ kubectl cp test-observer-api-RESTOFPODNAME:/home/app/migrations/versions ./migrations/versions`

You can get RESTOFPODNAME by running

2 changes: 0 additions & 2 deletions backend/alembic.ini
@@ -15,8 +15,6 @@ timezone = UTC
# version path separator
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.

sqlalchemy.url = postgresql+pg8000://postgres:password@test-observer-db:5432/postgres

[loggers]
keys = root,sqlalchemy,alembic

2 changes: 1 addition & 1 deletion backend/charm/metadata.yaml
@@ -26,4 +26,4 @@ resources:
api-image:
type: oci-image
description: OCI image from GitHub Container Repository
upstream-source: ghcr.io/canonical/test_observer/api:v0.0.9
upstream-source: ghcr.io/canonical/test_observer/api:main
13 changes: 9 additions & 4 deletions backend/charm/pyproject.toml
@@ -18,7 +18,11 @@ target-version = ["py38"]
[tool.ruff]
line-length = 99
select = ["E", "W", "F", "C", "N", "D", "I001"]
extend-ignore = [
ignore = [
"D100",
"D102",
"E501",
"D107",
"D203",
"D204",
"D213",
@@ -31,12 +35,13 @@ extend-ignore = [
"D409",
"D413",
]
ignore = ["E501", "D107"]
extend-exclude = ["__pycache__", "*.egg_info"]
per-file-ignores = {"tests/*" = ["D100","D101","D102","D103","D104"]}
extend-exclude = ["__pycache__", "*.egg_info", "lib"]

[tool.ruff.mccabe]
max-complexity = 10

[tool.mypy]
exclude = ["lib"]

[tool.codespell]
skip = "build,lib,venv,icon.svg,.tox,.git,.mypy_cache,.ruff_cache,.coverage"
65 changes: 32 additions & 33 deletions backend/charm/src/charm.py
@@ -36,12 +36,8 @@ def __init__(self, *args):
self.database = DatabaseRequires(
self, relation_name="database", database_name="test_observer_db"
)
self.framework.observe(
self.database.on.database_created, self._on_database_changed
)
self.framework.observe(
self.database.on.endpoints_changed, self._on_database_changed
)
self.framework.observe(self.database.on.database_created, self._on_database_changed)
self.framework.observe(self.database.on.endpoints_changed, self._on_database_changed)
self.framework.observe(
self.database.on.database_relation_broken,
self._on_database_relation_broken,
@@ -56,31 +52,46 @@ self._test_observer_rest_api_client_changed,
self._test_observer_rest_api_client_changed,
)

self.framework.observe(self.on.upgrade_charm, self._on_upgrade_charm)

self.ingress = IngressPerAppRequirer(
self, host=self.config["hostname"], port=self.config["port"]
)
self.framework.observe(self.on.migrate_database_action, self._migrate_database)

def _migrate_database(self, event):
def _on_upgrade_charm(self, event):
self._migrate_database()

def _migrate_database(self):
# only leader runs database migrations
if not self.unit.is_leader():
raise SystemExit(0)

if not self.container.can_connect():
self.unit.status = WaitingStatus("Waiting for Pebble for API")
raise SystemExit(0)

self.unit.status = MaintenanceStatus("Migrating database")

process = self.container.exec(
["alembic", "upgrade", "head"],
working_dir="./backend",
timeout=None,
working_dir="/home/app",
environment=self._postgres_relation_data(),
)
stdout, stderr = process.wait_output()

for line in stdout.splitlines():
logger.info(line.strip())

logger.info(stdout)
if stderr:
for line in stderr.splitlines():
logger.error(line.strip())
logger.error(stderr)

self.unit.status = ActiveStatus()

def _on_database_changed(
self,
event: DatabaseCreatedEvent | DatabaseEndpointsChangedEvent,
):
logger.info("Database changed event: %s", event)
if isinstance(event, DatabaseCreatedEvent):
self._migrate_database()
self._update_layer_and_restart(None)

def _on_database_relation_broken(self, event):
@@ -96,14 +107,10 @@ def _on_config_changed(self, event):
self._update_layer_and_restart(event)

def _update_layer_and_restart(self, event):
self.unit.status = MaintenanceStatus(
f"Updating {self.pebble_service_name} layer"
)
self.unit.status = MaintenanceStatus(f"Updating {self.pebble_service_name} layer")

if self.container.can_connect():
self.container.add_layer(
self.pebble_service_name, self._pebble_layer, combine=True
)
self.container.add_layer(self.pebble_service_name, self._pebble_layer, combine=True)
self.container.restart(self.pebble_service_name)
self.unit.set_workload_version(self.version)
self.unit.status = ActiveStatus()
@@ -127,13 +134,9 @@ def _postgres_relation_data(self) -> dict:
def _test_observer_rest_api_client_joined(self, event: RelationJoinedEvent) -> None:
logger.info(f"Test Observer REST API client joined {event}")

def _test_observer_rest_api_client_changed(
self, event: RelationChangedEvent
) -> None:
def _test_observer_rest_api_client_changed(self, event: RelationChangedEvent) -> None:
if self.unit.is_leader():
logger.debug(
f"Setting hostname in data bag for {self.app}: {self.config['hostname']}"
)
logger.debug(f"Setting hostname in data bag for {self.app}: {self.config['hostname']}")
event.relation.data[self.app].update(
{
"hostname": self.config["hostname"],
@@ -143,15 +146,11 @@ def _test_observer_rest_api_client_changed(

@property
def version(self) -> str | None:
if self.container.can_connect() and self.container.get_services(
self.pebble_service_name
):
if self.container.can_connect() and self.container.get_services(self.pebble_service_name):
# 0.0.0.0 instead of config['hostname'] intentional:
# request made from pebble's container to api in the same unit (same pod).
try:
return get(f"http://0.0.0.0:{self.config['port']}/v1/version").json()[
"version"
]
return get(f"http://0.0.0.0:{self.config['port']}/v1/version").json()["version"]
except Exception as e:
logger.warning(f"Failed to get version: {e}")
logger.exception(e)
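The rewritten `_migrate_database` streams Alembic's output into the charm logger one line at a time instead of dumping whole blobs. That logging pattern can be sketched on its own, outside the ops framework; the function name here is illustrative, not part of the charm:

```python
import logging

logger = logging.getLogger("charm")

def log_process_output(stdout: str, stderr: str) -> None:
    """Log each line of a finished process's output, mirroring the charm's pattern."""
    for line in stdout.splitlines():
        logger.info(line.strip())
    if stderr:
        for line in stderr.splitlines():
            logger.error(line.strip())

# Example: what the charm would log after `alembic upgrade head` finishes.
log_process_output("INFO  [alembic.runtime.migration] Running upgrade\n", "")
```

Per-line logging keeps multi-line Alembic output readable in `juju debug-log`, where a single multi-line message is awkward to scan.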
20 changes: 12 additions & 8 deletions backend/migrations/env.py
@@ -1,20 +1,25 @@
from logging.config import fileConfig

from sqlalchemy import engine_from_config
from sqlalchemy import pool
from sqlalchemy import engine_from_config, pool

from alembic import context

from test_observer.data_access import Base

from test_observer.data_access.setup import DB_URL

# for 'autogenerate' support
target_metadata = Base.metadata

config = context.config

if config.config_file_name is not None:
fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
from test_observer.data_access import Base

target_metadata = Base.metadata
# Don't overwrite value if set by tests
if config.get_main_option("sqlalchemy.url") is None:
config.set_main_option("sqlalchemy.url", DB_URL)


def run_migrations_offline() -> None:
@@ -29,9 +34,8 @@ def run_migrations_offline() -> None:
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
url=config.get_main_option("sqlalchemy.url"),
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
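The `env.py` change sets `sqlalchemy.url` from `DB_URL` only when the option is not already set, so tests can inject their own database URL. The guard can be sketched with a minimal stand-in for Alembic's config object (`FakeConfig` and `configure_db_url` are illustrative names, not the project's API):

```python
class FakeConfig:
    """Minimal stand-in for alembic.config.Config's main-option accessors."""

    def __init__(self):
        self._opts = {}

    def get_main_option(self, name):
        return self._opts.get(name)

    def set_main_option(self, name, value):
        self._opts[name] = value


DB_URL = "postgresql+pg8000://postgres:password@test-observer-db:5432/postgres"


def configure_db_url(config, default_url=DB_URL):
    # Don't overwrite a value already set (e.g. by tests).
    if config.get_main_option("sqlalchemy.url") is None:
        config.set_main_option("sqlalchemy.url", default_url)


cfg = FakeConfig()
configure_db_url(cfg)
print(cfg.get_main_option("sqlalchemy.url"))  # falls back to DB_URL
```

Moving the URL out of `alembic.ini` and into code also explains the two-line deletion from `backend/alembic.ini` above: the hardcoded `sqlalchemy.url` is no longer needed there.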
@@ -0,0 +1,33 @@
"""Use nulls not distinct
Revision ID: b33ee8dd41b1
Revises: 6a80dad01d24
Create Date: 2023-06-20 10:00:07.676129+00:00
"""
from alembic import op


# revision identifiers, used by Alembic.
revision = "b33ee8dd41b1"
down_revision = "6a80dad01d24"
branch_labels = None
depends_on = None


def upgrade() -> None:
op.execute(
"ALTER TABLE artefact_build "
"DROP CONSTRAINT IF EXISTS artefact_build_artefact_id_architecture_revision_key"
", ADD CONSTRAINT unique_artefact_build "
"UNIQUE NULLS NOT DISTINCT (artefact_id, architecture, revision);"
)


def downgrade() -> None:
op.execute(
"ALTER TABLE artefact_build "
"DROP CONSTRAINT IF EXISTS unique_artefact_build"
", ADD CONSTRAINT artefact_build_artefact_id_architecture_revision_key "
"UNIQUE (artefact_id, architecture, revision);"
)
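This migration switches the `artefact_build` uniqueness constraint to `UNIQUE NULLS NOT DISTINCT` (PostgreSQL 15+). Under default SQL semantics NULLs never compare equal, so two rows differing only in a NULL `revision` both pass a plain UNIQUE constraint. A quick illustration of the problem being fixed, using stdlib sqlite3 (which follows the default NULLS DISTINCT behaviour; the table here is a simplified sketch of the real one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE artefact_build ("
    "artefact_id INT, architecture TEXT, revision INT, "
    "UNIQUE (artefact_id, architecture, revision))"
)
# Both inserts succeed: NULLs are treated as distinct from each other,
# so the plain unique constraint does not catch this duplicate.
conn.execute("INSERT INTO artefact_build VALUES (1, 'amd64', NULL)")
conn.execute("INSERT INTO artefact_build VALUES (1, 'amd64', NULL)")
count = conn.execute("SELECT COUNT(*) FROM artefact_build").fetchone()[0]
print(count)  # 2
```

With `NULLS NOT DISTINCT`, PostgreSQL would reject the second insert, which is what the new `unique_artefact_build` constraint enforces.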
