Commit
Merge branch 'release/v0.7.4' into main
github-actions committed Dec 30, 2023
2 parents eab55a9 + 92d793c commit abe2620
Showing 29 changed files with 1,260 additions and 1,387 deletions.
1 change: 1 addition & 0 deletions .lycheeignore
@@ -7,4 +7,5 @@ www.linkedin.com/company/aklivity
www.twitter.com/aklivityinc
.+algolia.net
amazonaws.com
hub.docker.com
.+\.name
2 changes: 1 addition & 1 deletion .vale.ini
@@ -8,7 +8,7 @@ Vocab = Base
Packages = write-good


[*.{js,ts}]
[src/.vuepress/sidebar/en.ts]
BasedOnStyles = Vale, docs

[*.{md,mdx}]
2 changes: 1 addition & 1 deletion deploy-versions.json
@@ -1 +1 @@
[{"text":"Latest","icon":"fas fa-home","key":"latest","tag":"v0.7.3"}]
[{"text":"Latest","icon":"fas fa-home","key":"latest","tag":"v0.7.4"}]
10 changes: 5 additions & 5 deletions package.json
@@ -1,6 +1,6 @@
{
"name": "zilla-docs",
"version": "0.7.3",
"version": "0.7.4",
"description": "The official documentation for the aklivity/zilla open-source project",
"keywords": [],
"author": "aklivity.io",
@@ -15,8 +15,8 @@
"clean-dev": "vuepress dev src --clean-cache --debug",
"dev": "vuepress dev src",
"update-package": "pnpm dlx vp-update",
"lint": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.config\"",
"lint-fix": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.config\"",
"lint": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.config\" \"#.check-schema\"",
"lint-fix": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.config\" \"#.check-schema\"",
"link-checker": "pnpm build && link-checker src/.vuepress/dist"
},
"devDependencies": {
@@ -28,8 +28,8 @@
"link-checker": "^1.4.2",
"markdownlint-cli2": "^0.8.1",
"mathjax-full": "^3.2.2",
"vue": "^3.3.9",
"vue": "^3.3.11",
"vuepress": "2.0.0-rc.0",
"vuepress-theme-hope": "2.0.0-rc.1"
"vuepress-theme-hope": "2.0.0-rc.4"
}
}
1,840 changes: 921 additions & 919 deletions pnpm-lock.yaml

Large diffs are not rendered by default.

41 changes: 8 additions & 33 deletions src/.vuepress/sidebar/en.ts
@@ -166,61 +166,36 @@ export const enSidebar = sidebar({
},
{
text: "Kafka Proxying",
link: "concepts/kafka-proxies/rest-proxy.md",
link: "concepts/kafka-proxies/http-proxy.md",
children: [
{
text: "REST Kafka Proxy",
text: "HTTP Kafka Proxy",
collapsible: true,
link: "concepts/kafka-proxies/rest-proxy.md",
link: "concepts/kafka-proxies/http-proxy.md",
children: [
{
text: "Overview",
link: "concepts/kafka-proxies/rest-proxy.md",
link: "concepts/kafka-proxies/http-proxy.md",
},
{
text: "Create a Simple REST API",
text: "Create a Simple CRUD API",
link: "tutorials/rest/rest-intro.md",
},
{
text: "Build a CQRS Todo App",
link: "tutorials/todo-app/build.md",
children: [
{
text: "Application Setup",
link: "tutorials/todo-app/build.md",
},
{
text: "Adding Auth",
link: "tutorials/todo-app/secure.md",
},
],
},
],
},
{
text: "SSE Kafka Proxy",
collapsible: true,
link: "concepts/kafka-proxies/sse-proxy.md",
children: [
{
text: "Overview",
link: "concepts/kafka-proxies/sse-proxy.md",
},
{
text: "Create a Simple SSE Stream",
link: "tutorials/sse/sse-intro.md",
},
{
text: "Build a CQRS Todo App",
link: "tutorials/sse/sse-todo-build.md",
link: "tutorials/todo-app/build.md",
children: [
{
text: "Application Setup",
link: "tutorials/sse/sse-todo-build.md",
link: "tutorials/todo-app/build.md",
},
{
text: "Adding Auth",
link: "tutorials/sse/sse-todo-secure.md",
link: "tutorials/todo-app/secure.md",
},
],
},
2 changes: 1 addition & 1 deletion src/README.md
@@ -29,7 +29,7 @@ features:
- title: Kafka Proxies
icon: fas fa-arrows-left-right-to-line
details: Define REST, SSE, gRPC and MQTT endpoints that map to Kafka topic streams.
link: /concepts/kafka-proxies/rest-proxy.html
link: /concepts/kafka-proxies/http-proxy.html

- title: Reference
icon: fas fa-book
16 changes: 10 additions & 6 deletions src/concepts/config-intro.md
@@ -27,8 +27,8 @@ Bindings have a `kind`, indicating how they should behave, such as:
- `proxy` - Handles the translate or encode behaviors between components.
- `server` - Exists to decode a protocol on the inbound network stream, producing higher-level application streams for each request.
- `client` - Receives inbound application streams and encodes each as a network stream.
- `remote_server` - Exists to adapt `kafka` topic streams to higher-level application streams. Read more in the [kafka-grpc binding](../reference/config/bindings/binding-kafka-grpc.md#summary).
- `cache_client` & `cache_server` - Combined provide a persistent cache of `kafka` messages per `topic` `partition` honoring the `kafka` `topic` configuration for message expiration and compaction. Read more in the [kafka binding](../reference/config/bindings/binding-kafka.md#cache-behavior).
- `remote_server` - Exists to adapt `kafka` topic streams to higher-level application streams. Read more in the [kafka-grpc](../reference/config/bindings/binding-kafka-grpc.md#summary) binding.
- `cache_client` & `cache_server` - Combined provide a persistent cache of `kafka` messages per `topic` `partition` honoring the `kafka` `topic` configuration for message expiration and compaction. Read more in the [kafka](../reference/config/bindings/binding-kafka.md#cache-behavior) binding.

### Routes

@@ -68,6 +68,10 @@ A condition will attempt to match the target stream exactly against the configur
[kafka-grpc]:../reference/config/bindings/binding-kafka-grpc.md#routes
[mqtt-kafka]:../reference/config/bindings/binding-mqtt-kafka.md#routes

### Dynamic path parameters

Path segments can be parsed into named values of the `${params}` object and used in other parts of a binding. For example, a `/tasks/123` path with a `/tasks/{id}` mapping will extract `123` into the `${params.id}` field.
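As a sketch, the extraction above might be used in an http-kafka route to filter a Kafka topic by message key (the topic, path, and exit names here are illustrative assumptions, not taken from this commit):

```yaml
routes:
  - when:
      - method: GET
        path: /tasks/{id}            # GET /tasks/123 binds ${params.id} to "123"
    exit: north_kafka_cache_client   # illustrative exit name
    with:
      capability: fetch
      topic: tasks                   # illustrative topic
      filters:
        - key: ${params.id}          # fetch only messages keyed by the path id
```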

### Routing With extra params

After the route logic matches, additional parameters are applied `with` the inbound data streams.
@@ -76,19 +80,19 @@ After the route logic matches, additional parameters are applied `with` the inbound data streams.

Routes with the `fetch` capability map retrieval requests from a Kafka topic, supporting filtered or unfiltered retrieval of messages from the topic partitions, merged into a unified response. Filtering can apply to the Kafka message key, message headers, or a combination of both message key and headers.

The [http-kafka binding](../reference/config/bindings/binding-http-kafka.md) provides additional support for extracting parameter values from the inbound HTTP request path. Successful `200 OK` HTTP responses include an `etag` header that can be used with `if-none-match` for subsequent conditional `GET` requests to check for updates. Rather than polling, HTTP requests can also include the `prefer wait=N` header to wait a maximum of `N` seconds before responding with `304 Not Modified` if not modified. When a new message arrives on the topic that would modify the response, all `prefer: wait=N` clients receive the response immediately with a corresponding new `etag`.
The [http-kafka](../reference/config/bindings/binding-http-kafka.md) binding provides additional support for extracting parameter values from the inbound HTTP request path. Successful `200 OK` HTTP responses include an [ETag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) header that can be used with [if-none-match](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-None-Match) for subsequent conditional `GET` requests to check for updates. Rather than polling, HTTP requests can also include the `prefer wait=N` header to wait a maximum of `N` seconds before responding with `304 Not Modified` if not modified. When a new message arrives on the topic that would modify the response, all `prefer: wait=N` clients receive the response immediately with a corresponding new ETag.

#### Reliable message delivery

With the [grpc-kafka binding](../reference/config/bindings/binding-grpc-kafka.md), using the fetch capability, reliable message delivery is achieved by capturing the value of the `reliability` `field` injected into each response stream message at the gRPC client, and replaying the value via the `reliability` `metadata` header when reestablishing the stream with a new gRPC request. This allows interrupted streams to pick up where they left off without missing messages in the response stream.
With the [grpc-kafka](../reference/config/bindings/binding-grpc-kafka.md) binding, using the fetch capability, reliable message delivery is achieved by capturing the value of the `reliability` `field` injected into each response stream message at the gRPC client, and replaying the value via the `reliability` `metadata` header when reestablishing the stream with a new gRPC request. This allows interrupted streams to pick up where they left off without missing messages in the response stream.

#### The Produce capability

Routes with the `produce` capability map any request-response network call to a correlated stream of Kafka messages. The request message(s) are sent to a `requests` topic with a `zilla:correlation-id` header. When the request message(s) are received and processed by the Kafka `requests` topic consumer, it produces response message(s) to the `responses` topic, with the same `zilla:correlation-id` header to correlate the response.

Requests with an `idempotency-key` header can be replayed and receive the same response. This requires the Kafka consumer to detect and ignore the duplicate request with the same `idempotency-key` and `zilla:correlation-id`. For this purpose, A log compacted topic can selectively remove records where a more recent update with the same primary key exists.
Requests with an `idempotency-key` header can be replayed and receive the same response. A Kafka consumer can detect and ignore any potential duplicate requests because they will have the same `idempotency-key` and `zilla:correlation-id`.

In the [http-kafka binding](../reference/config/bindings/binding-http-kafka.md), specifying `async` allows clients to include a `prefer: respond-async` header in the HTTP request to receive `202 Accepted` response with `location` response header.
In the [http-kafka](../reference/config/bindings/binding-http-kafka.md) binding, specifying `async` allows clients to include a `prefer: respond-async` header in the HTTP request to receive `202 Accepted` response with `location` response header.

A corresponding `routes[].when` object with a matching `GET` method and `location` path is also required for follow-up `GET` requests to return the same response as would have been returned if the `prefer: respond-async` request header had been omitted.
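As a hedged sketch, a `produce` route over the `requests` and `responses` topics described above might look like this (route shape per the http-kafka binding reference; the path and exit name are illustrative assumptions):

```yaml
routes:
  - when:
      - method: POST
        path: /items
    exit: north_kafka_cache_client   # illustrative exit name
    with:
      capability: produce
      topic: requests      # request messages carry a zilla:correlation-id header
      reply-to: responses  # the correlated response is fetched from this topic
```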

71 changes: 71 additions & 0 deletions src/concepts/kafka-proxies/http-proxy.md
@@ -0,0 +1,71 @@
---
description: Zilla lets you configure application-centric REST API and SSE stream endpoints that unlock Kafka event-driven architectures.
prev: false
next: /tutorials/rest/rest-intro.md
---

# HTTP Kafka Proxy

Zilla lets you configure application-centric REST APIs and SSE streams that unlock Kafka event-driven architectures. A developer has the freedom to define their own HTTP mapping to Kafka, with control over the topics, message key, message headers, and payload. Any HTTP client can interact with Kafka without navigating Kafka-specific paradigms.

## Configure Endpoints

Zilla can map REST APIs to Kafka using the [http-kafka](../../reference/config/bindings/binding-http-kafka.md) binding in `zilla.yaml`. Zilla routes REST URLs using [wildcard pattern matching](../../concepts/config-intro.md#pattern-matching) and [dynamic path params](../../concepts/config-intro.md#dynamic-path-parameters). Dynamic path matching and custom message routing from endpoints to Kafka topics help prevent API lock-in.
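A minimal sketch of such a mapping in `zilla.yaml` (all names, paths, and topics are illustrative assumptions, not taken from this commit):

```yaml
bindings:
  north_http_kafka_mapping:
    type: http-kafka
    kind: proxy
    routes:
      - when:
          - method: PUT
            path: /tasks/{id}
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: tasks
          key: ${params.id}   # dynamic path param becomes the Kafka message key
```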

### HTTP request methods

Zilla separates the HTTP request methods into two groups called capabilities: produce and fetch. The [produce](../../concepts/config-intro.md#the-produce-capability) capability handles the `POST`, `PUT`, `DELETE`, and `PATCH` methods, which produce messages onto Kafka topics. The [fetch](../../reference/config/bindings/binding-http-kafka.md#with-capability-fetch) capability handles the `GET` method, which fetches messages from Kafka topics. One exception is a route managing async correlation: the `produce` route will have two `when` clauses, a `PUT` clause for submission and a `GET` clause matching the `async.location` path returned to the caller.
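The async-correlation exception described above might be sketched as a single route with two `when` clauses (all names, topics, and paths are illustrative assumptions):

```yaml
routes:
  - when:
      - method: PUT
        path: /tasks/{id}                  # submission
      - method: GET
        path: /tasks/{id};{correlationId}  # matches the async.location path
    exit: north_kafka_cache_client
    with:
      capability: produce
      topic: task-requests
      key: ${params.id}
      reply-to: task-responses
      async:
        location: /tasks/${params.id};${correlationId}
```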

## Correlated Request-Response

Zilla manages the HTTP lifecycle with the request and response payloads over a pair of Kafka topics. Each request message is correlated to the corresponding response message with a `zilla:correlation-id` header, providing an identifier for both Zilla and Kafka workflows to act on.

### sync

A synchronous interaction starts when a client calls an HTTP endpoint, producing a request message. Zilla does not respond immediately; it waits for the correlated response message. Once a message with the correct `zilla:correlation-id` header is delivered on the response topic, it is fetched and returned as the response to the initial request.

### async

An asynchronous interaction includes a `prefer: respond-async` header when calling an HTTP endpoint. After producing a request message, the connection will immediately return with `202 Accepted` plus the location path to retrieve a correlated response. The client then sends a `GET` request to the returned location path with the `prefer: wait=N` header to retrieve the correlated response. The request will wait for up to `N` seconds and return once a message with the correct `zilla:correlation-id` header is delivered on the response topic, removing the need for client polling.

## SSE Streaming

The Zilla Server-sent Events (SSE) Kafka Proxy exposes an SSE stream of Kafka messages using the [sse-kafka](../../reference/config/bindings/binding-sse-kafka.md) binding.

An [SSE](https://html.spec.whatwg.org/multipage/server-sent-events.html) server allows a web browser using the `EventSource` interface to send a request to an SSE endpoint and receive a stream of text from the server, interpreted as individual messages. Zilla relays text messages on a Kafka topic into the event stream.

### Message Filtering

The message source topic is defined in a route, and the route is matched by the path defined for the client to connect. A route can [filter](../../reference/config/bindings/binding-sse-kafka.md#routes-with) the messages delivered to the SSE stream using the message key and headers. A filter's value can be statically defined in the config or be pulled from a [path param](../../concepts/config-intro.md#dynamic-path-parameters).
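A minimal sse-kafka route sketch, assuming illustrative path, topic, and exit names:

```yaml
routes:
  - when:
      - path: /tasks/{id}
    exit: north_kafka_cache_client   # illustrative exit name
    with:
      topic: task-events             # illustrative source topic
      filters:
        - key: ${params.id}          # filter value pulled from a path param
```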

### Reliable Delivery

Zilla sends an event `id` with every message. A client can send a `last-event-id` header to recover from an interrupted stream without message loss. The client doesn't need to acknowledge message receipt explicitly. An interrupted SSE stream can be recovered by connecting to any Zilla instance in the same auto-scaling group because each Zilla instance is stateless.

## Oneway

Clients can produce a fire-and-forget HTTP request payload to a Kafka topic. The Kafka message key and headers are set using [path params](../../concepts/config-intro.md#dynamic-path-parameters).

## Idempotency

Requests can be made idempotent (multiple identical requests receive the same response every time) by including an `idempotency-key` header. Zilla uses the `idempotency-key` and `zilla:correlation-id` headers to identify the request and return the same message fetched from the response topic without producing a second message to the request topic. Each new `idempotency-key` used will produce a message with at-least-once delivery. A second message will be produced if the same request is made in the short window before a correlated response is added to the response topic. A Kafka consumer can detect and ignore any such duplicate requests because they will have the same `idempotency-key` and `zilla:correlation-id`.

## Caching

Bindings can retrieve messages from a Kafka topic, filtered by message key and headers, with the key and header values extracted from the [path params](../../concepts/config-intro.md#dynamic-path-parameters).

An HTTP response returns with an [ETag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) header. This fetch supports a conditional [if-none-match](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-None-Match) request, returning `304` if not modified or `200` if modified (with a new ETag header). A client can wait for a modified response by including `prefer:wait=N` and `cache-control: no-cache` headers. The request will wait for up to `N` seconds and return once a message with a new ETag header is delivered on the response topic.

## CORS

Zilla supports Cross-Origin Resource Sharing (CORS) and allows you to specify fine-grained access control, including the specific request origins, methods, and headers allowed, and the specific response headers exposed. Because CORS enforcement acts as a guard on HTTP traffic and has no dependency on Apache Kafka configuration, it is defined in the [http](../../reference/config/bindings/binding-http.md) binding.

## Authorization

Zilla has a modular config that includes the concept of a [Guard](../../reference/config/overview.md#guards) where you define your `guard` configuration and reference that `guard` to authorize a specific endpoint. JSON Web Token (JWT) authorization is supported with the [`jwt`](../../reference/config/guards/guard-jwt.md) Guard.
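A minimal `jwt` guard sketch, assuming illustrative guard, issuer, and audience names (see the guard-jwt reference for the full set of options):

```yaml
guards:
  jwt_auth:                              # illustrative guard name
    type: jwt
    options:
      issuer: https://auth.example.com   # assumed identity provider
      audience: https://api.example.com  # assumed audience claim
```

A binding's routes can then reference `jwt_auth` in a `guarded` section to require specific roles before the route is taken.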

### SSE Continuous Authorization

Unlike REST, which authorizes individual requests, Zilla continuously authorizes the long-lived SSE connection stream. Zilla will send a "challenge" event, triggering the client to send up-to-date authorization credentials, such as a JWT, before expiration. Zilla adheres to a secure-by-default approach, meaning the response stream is terminated if the authorization expires before the client responds to the "challenge" event.

Multiple SSE streams on the same HTTP/2 connection that are authorized by the same JWT are reauthorized by a single "challenge" event response from the client; all of them are terminated if the token's expiration isn't extended.