Merge branch 'release/v2.1.11' into main
github-actions committed Nov 8, 2024
2 parents 3825b4b + c161fdc commit 6ed6253
Showing 89 changed files with 3,198 additions and 325 deletions.
17 changes: 11 additions & 6 deletions .check-schema/README.md
@@ -2,7 +2,17 @@

This project compares the JSON Schema from Zilla to the [Reference](../src/reference) section of the docs.

## Update schema
## Generate the schema

You can generate the Zilla schema on startup by using the `zilla start` command with the `-Pzilla.engine.verbose.schema.plain` option. The `schema.plain` variant is used so the schema is emitted without the extra string validation options that get injected.

```bash
zilla start -v -Pzilla.engine.verbose.schema.plain
```

### Update schema from Docker

You can generate the schema from the Docker image and pull it from the logs, then remove the non-JSON lines from the beginning and end of each file.

In the repository root directory run:

@@ -18,9 +28,4 @@ docker stop $CONTAINER_ID;

gsed -i '1,2d' ./.check-schema/zilla-schema.json;
gsed -i '$d' ./.check-schema/zilla-schema.json;

```

Once the Docker container has printed "started", it must be deleted for the command to complete.

Remove the non-JSON lines from the beginning and end of each file.
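The line-stripping step above can be sketched with plain GNU `sed` against a synthetic log capture (the file path and contents below are illustrative, not real Zilla output):

```shell
# Simulate a captured log: two startup lines, the schema JSON, a trailing line.
printf 'startup line 1\nstartup line 2\n{"schema":true}\nstarted\n' > /tmp/zilla-schema.json

sed -i '1,2d' /tmp/zilla-schema.json  # delete the first two lines
sed -i '$d' /tmp/zilla-schema.json    # delete the last line

cat /tmp/zilla-schema.json            # only the JSON line remains
```

`gsed` in the snippet above is GNU sed installed via Homebrew on macOS; on Linux the stock `sed` behaves the same way.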
400 changes: 390 additions & 10 deletions .check-schema/zilla-schema.json

Large diffs are not rendered by default.

@@ -79,6 +79,7 @@ EDAs
enSidebar
enum
etag
Fargate
fas
gitea
Grafana
File renamed without changes.
27 changes: 27 additions & 0 deletions .github/workflows/cookbook_artifacts.yaml
@@ -0,0 +1,27 @@
name: Release Cookbook Artifacts

on:
workflow_dispatch:
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+'

permissions:
contents: write

jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

- name: Tar all cookbooks
run: for i in src/cookbooks/*/; do tar -zcvf "${i%/}.tar.gz" "$i"; done

- name: Release
uses: softprops/action-gh-release@v2
if: startsWith(github.ref, 'refs/tags/')
with:
files: |
src/cookbooks/*.tar.gz
LICENSE
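The tar step in the workflow above relies on the `${i%/}` parameter expansion to strip the glob's trailing slash before appending `.tar.gz`. A standalone sketch against throwaway directories (the paths are illustrative, not the repository's real cookbooks):

```shell
# Recreate the workflow loop against temporary cookbook directories.
mkdir -p /tmp/cookbooks/demo-a /tmp/cookbooks/demo-b
echo "placeholder" > /tmp/cookbooks/demo-a/README.md

for i in /tmp/cookbooks/*/; do
  # "${i%/}" removes the trailing slash: /tmp/cookbooks/demo-a/ -> demo-a.tar.gz
  tar -zcf "${i%/}.tar.gz" "$i"
done

ls /tmp/cookbooks/  # now contains demo-a.tar.gz and demo-b.tar.gz alongside the directories
```

Without the `%/` strip, the archive names would contain a slash (`demo-a/.tar.gz`) and `tar` would try to write inside the directory instead of next to it.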
4 changes: 2 additions & 2 deletions .github/workflows/gitflow-release.yaml
@@ -41,7 +41,7 @@ jobs:

- if: ${{ env.IS_CUSTOM_VERSION && steps.validate-custom-version.outputs.match != github.event.inputs.custom-version }}
name: Custom Version must be "#.#.#"
run: echo "Custom Version must be #.#.#" exit 1;
run: echo "Custom Version must be \#.\#.\#" exit 1;

release:
runs-on: ubuntu-latest
@@ -51,7 +51,7 @@
steps:
- uses: actions/checkout@v4
with:
token: ${{secrets.GITFLOW_RELEASES_TOKEN}}
token: ${{secrets.GITFLOW_RELEASES_TOKEN}} # The PAT used to push changes to protected branches
fetch-depth: "0"

- name: get new version
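The `#.#.#` validation in the hunk above boils down to a three-number dotted pattern. A hedged sketch of the same check with `grep -E` (the function name is illustrative, not part of the workflow):

```shell
# Succeed only when the argument has the #.#.# shape the
# gitflow workflow expects for a custom version.
check_version() {
  echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'
}

check_version "2.1.11" && echo "2.1.11 is valid"
check_version "2.1"    || echo "2.1 is rejected"
```

Anchoring with `^` and `$` matters: without them, strings like `v2.1.11-rc1` would also pass.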
1 change: 1 addition & 0 deletions .gitignore
@@ -4,3 +4,4 @@ dist
src/.vuepress/.cache/
src/.vuepress/.temp/
.idea/
src/cookbooks/quickstart/live-demo-deploy/.env
5 changes: 1 addition & 4 deletions .lycheeignore
@@ -3,13 +3,10 @@ fonts.googleapis.com
fonts.gstatic.com
github.com/.+/zilla-docs/edit
docs.aklivity.io
https://quickstart.aklivity.io
www.linkedin.com/company/aklivity
www.twitter.com/aklivityinc
.+algolia.net
amazonaws.com
hub.docker.com
.+\.name

# These should be removed once the docsearch plugin is fixed
.+/assets/style.+
.+/assets/docsearch.+
40 changes: 14 additions & 26 deletions README.md
@@ -42,6 +42,12 @@ pnpm i
pnpm dev
```

### Search

This project uses the free [Algolia Docsearch](https://docsearch.algolia.com/) tool to index the production site for all public versions. The crawler config is based on the recommended config from the [Docsearch Plugin](https://ecosystem.vuejs.press/plugins/search/docsearch.html).

To see the current search index or crawler config, log in to the [Algolia Dashboard](https://dashboard.algolia.com/users/sign_in).

### Running Lints

- Markdown linting
@@ -78,6 +84,14 @@ pnpm dev
pnpm link-checker && lychee --exclude-mail --base="src/.vuepress/dist" src/.vuepress/dist
```

- Schema checking

You can automatically check the reference docs against the Zilla JSON schema. More instructions are in [.check-schema/README.md]().

```bash
pnpm check-schema > schema-edits.txt
```

### Reference docs Structure

Pages in the reference section describe, as briefly as possible and in an orderly way, the properties and interface of a feature.
@@ -114,20 +128,6 @@ parentArray:

## Section

:::: note ToC

- [topLevelProp\*](#toplevelprop)
- [topLevelProp.child\*](#toplevelprop-child)
- [array](#array)
- [parentArray](#parentarray)
- [parentArray\[\].child](#parentarray-child)

::: right
\* required
:::

::::

### topLevelProp\*

> `object`
@@ -176,18 +176,6 @@
Description.
````

### Generate schema asset

Capture the output and delete the first and last lines:

```bash
docker run -it --rm -e ZILLA_INCUBATOR_ENABLED=true ghcr.io/aklivity/zilla:latest start -v -Pzilla.engine.verbose.schema > src/.vuepress/public/assets/zilla-schema.json
```

```bash
pnpm check-schema > schema-edits.txt
```

## Provide feedback

We’d love to hear your feedback. Please file documentation issues only in the docs GitHub repository. You can file a new issue to suggest improvements or to report errors in the existing documentation.
2 changes: 1 addition & 1 deletion deploy-versions.json
@@ -1 +1 @@
[{"text":"Latest","icon":"fas fa-home","key":"latest","tag":"v2.1.10"}]
[{"text":"Latest","icon":"fas fa-home","key":"latest","tag":"v2.1.11"}]
2 changes: 1 addition & 1 deletion package.json
@@ -1,7 +1,7 @@
{
"name": "zilla-docs",
"type": "module",
"version": "2.1.10",
"version": "2.1.11",
"description": "The official documentation for the aklivity/zilla open-source project",
"keywords": [],
"author": "aklivity.io",
6 changes: 5 additions & 1 deletion src/.vuepress/sidebar/en.ts
@@ -363,7 +363,7 @@ export const enSidebar = sidebar({
},
{
text: "Installing Zilla",
link: "how-tos/deploy-operate.md",
link: "how-tos/deploy-operate/index.md",
children: [],
},
{
@@ -383,6 +383,10 @@
text: "Push to an OTLP Collector",
link: "how-tos/telemetry/opentelemetry-protocol.md",
},
{
text: "Auto scaling on K8s",
link: "how-tos/deploy-operate/autoscale-k8s.md",
},
],
},
{
4 changes: 2 additions & 2 deletions src/concepts/kafka-proxies/mqtt-proxy.md
@@ -70,11 +70,11 @@ An MQTT client can Publish messages to any configured Kafka topics, marking spec

### Session Management

MQTT connect, disconnect, and other session messages are maintained on the the log compacted [sessions](../../reference/config/bindings/mqtt-kafka/proxy.md#topics-sessions) Kafka topic. A message keyed by the MQTT client ID on the topic is used to track client subscriptions across client reconnects.
MQTT connect, disconnect, and other session messages are maintained on the log compacted [sessions](../../reference/config/bindings/mqtt-kafka/proxy.md#topics-sessions) Kafka topic. A message keyed by the MQTT client ID on the topic is used to track client subscriptions across client reconnects.

#### Kafka Consumer Groups for MQTT sessions

A consumer group is created for each unique client ID used by an MQTT session with the format `zilla:<zilla namespace>-<binding name>-<MQTT client ID>`. Zilla minimizes the number of hearbeats required to approximately one per MQTT session expiry interval. When an MQTT session expires, perhaps because the MQTT client abruptly disconnected but did not reconnect, the corresponding consumer group also expires and the associated tracking state in the [sessions](../../reference/config/bindings/mqtt-kafka/proxy.md#topics-sessions) Kafka topic is cleaned up automatically.
A consumer group is created for each unique client ID used by an MQTT session with the format `zilla:<zilla namespace>-<binding name>-<MQTT client ID>`. Zilla minimizes the number of heartbeats required to approximately one per MQTT session expiry interval. When an MQTT session expires, perhaps because the MQTT client abruptly disconnected but did not reconnect, the corresponding consumer group also expires and the associated tracking state in the [sessions](../../reference/config/bindings/mqtt-kafka/proxy.md#topics-sessions) Kafka topic is cleaned up automatically.

## Authorizing clients

75 changes: 75 additions & 0 deletions src/cookbooks/http.kafka.sasl.scram/README.md
@@ -0,0 +1,75 @@
# http.kafka.sasl.scram

Listens on HTTP port `7114` or HTTPS port `7143` and synchronously produces messages to the `events` topic in a `SASL/SCRAM`-enabled Kafka.

## Running locally

This cookbook runs using Docker compose.

### Setup

The `setup.sh` script will:

- install Zilla, Kafka and Zookeeper to the Kubernetes cluster with helm and wait for the pods to start up
- create the `events` topic in Kafka
- create the SCRAM credential `user` (the default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in ZooKeeper)
- start port forwarding

```bash
./setup.sh
```

### Verify behavior

Send a `POST` request with an event body.

```bash
curl -v \
-X "POST" http://localhost:7114/events \
-H "Content-Type: application/json" \
-d "{\"greeting\":\"Hello, world\"}"
```

output:

```text
...
> POST /events HTTP/1.1
> Content-Type: application/json
...
< HTTP/1.1 204 No Content
```

Verify that the event has been produced to the `events` Kafka topic.

```bash
docker compose -p zilla-http-kafka-sasl-scram exec kafkacat \
kafkacat -C -b kafka:9092 -t events -J -u | jq .
```

output:

```json
{
"topic": "events",
"partition": 0,
"offset": 0,
"tstype": "create",
"ts": 1652465273281,
"broker": 1001,
"headers": [
"content-type",
"application/json"
],
"payload": "{\"greeting\":\"Hello, world\"}"
}
% Reached end of topic events [0] at offset 1
```

### Teardown

The `teardown.sh` script stops port forwarding, uninstalls Zilla, Kafka and Zookeeper and deletes the namespace.

```bash
./teardown.sh
```
88 changes: 88 additions & 0 deletions src/cookbooks/http.kafka.sasl.scram/compose.yaml
@@ -0,0 +1,88 @@
name: ${NAMESPACE:-zilla-http-kafka-sasl-scram}
services:
zilla:
image: ghcr.io/aklivity/zilla:${ZILLA_VERSION:-latest}
pull_policy: always
restart: unless-stopped
ports:
- 7114:7114
healthcheck:
interval: 5s
timeout: 3s
retries: 5
test: ["CMD", "bash", "-c", "echo -n '' > /dev/tcp/127.0.0.1/7114"]
environment:
KAFKA_BOOTSTRAP_SERVER: kafka:29092
SASL_USERNAME: user
SASL_PASSWORD: bitnami
volumes:
- ./zilla.yaml:/etc/zilla/zilla.yaml
command: start -v -e

kafka:
image: bitnami/kafka:3.5
restart: unless-stopped
ports:
- 9092:9092
healthcheck:
test: /opt/bitnami/kafka/bin/kafka-cluster.sh cluster-id --bootstrap-server kafka:29092 || exit 1
interval: 1s
timeout: 60s
retries: 60
environment:
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_CFG_NODE_ID: "1"
KAFKA_CFG_BROKER_ID: "1"
KAFKA_CFG_GROUP_INITIAL_REBALANCE_DELAY_MS: "0"
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "[email protected]:9093"
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CLIENT:PLAINTEXT,INTERNAL:SASL_PLAINTEXT,DOCKER:PLAINTEXT,CONTROLLER:PLAINTEXT"
KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
KAFKA_CFG_LOG_DIRS: "/tmp/logs"
KAFKA_CFG_PROCESS_ROLES: "broker,controller"
KAFKA_CFG_LISTENERS: "CLIENT://:9092,INTERNAL://:29092,CONTROLLER://:9093"
KAFKA_CFG_INTER_BROKER_LISTENER_NAME: "INTERNAL"
KAFKA_CFG_ADVERTISED_LISTENERS: "CLIENT://localhost:9092,INTERNAL://kafka:29092"
KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"

kafka-init:
image: bitnami/kafka:3.5
user: root
depends_on:
kafka:
condition: service_healthy
restart: true
deploy:
restart_policy:
condition: none
max_attempts: 0
entrypoint: ["/bin/sh", "-c"]
command:
- |
echo -e "Creating kafka topic";
/opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --create --if-not-exists --topic items-requests
/opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --create --if-not-exists --topic items-responses --config cleanup.policy=compact
echo -e "Successfully created the following topics:";
/opt/bitnami/kafka/bin/kafka-topics.sh --bootstrap-server kafka:29092 --list;
kafka-ui:
image: ghcr.io/kafbat/kafka-ui:latest
restart: unless-stopped
ports:
- 8080:8080
depends_on:
kafka:
condition: service_healthy
restart: true
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092

kafkacat:
image: confluentinc/cp-kafkacat:7.1.9
command: "bash"
stdin_open: true
tty: true

networks:
default:
driver: bridge
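The zilla service's healthcheck above uses bash's `/dev/tcp` pseudo-device: redirecting into `/dev/tcp/HOST/PORT` attempts a TCP connection and fails when nothing is listening. A sketch of the same probe outside compose, against the port from the healthcheck:

```shell
# Probe Zilla's HTTP port the way the compose healthcheck does.
# The redirect succeeds only if something accepts the TCP connection;
# against a closed port, bash exits non-zero.
if bash -c "echo -n '' > /dev/tcp/127.0.0.1/7114" 2>/dev/null; then
  echo "port 7114 is open"
else
  echo "port 7114 is closed"
fi
```

`/dev/tcp` is a bash feature, not a real file, which is why the healthcheck invokes `bash -c` rather than relying on the image's default `/bin/sh`.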
5 changes: 5 additions & 0 deletions src/cookbooks/http.kafka.sasl.scram/kafka_jaas.conf
@@ -0,0 +1,5 @@
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="user"
password="bitnami";
};
9 changes: 9 additions & 0 deletions src/cookbooks/http.kafka.sasl.scram/setup.sh
@@ -0,0 +1,9 @@
#!/bin/sh
set -e

# Start or restart Zilla
if [ -z "$(docker compose ps -q zilla)" ]; then
docker compose up -d
else
docker compose up -d --force-recreate --no-deps zilla
fi
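`setup.sh` branches on whether `docker compose ps -q zilla` printed a container ID. The `-z "$(…)"` idiom generalizes to any probe command; a sketch with stand-in commands (no Docker required — `true`, which prints nothing, stands in for a probe of a stopped service):

```shell
# Same shape as setup.sh: act differently depending on whether a
# probe command produced any output.
probe_output="$(true)"   # stand-in for: docker compose ps -q zilla
if [ -z "$probe_output" ]; then
  echo "not running: start the whole stack"   # docker compose up -d
else
  echo "running: recreate only zilla"         # docker compose up -d --force-recreate --no-deps zilla
fi
```

`-z` tests for an empty string, so the first branch runs whenever the probe printed nothing, i.e. when no matching container exists.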