Commit 3639bf4

Merge branch 'master' into feat/aot

latop2604 authored Sep 4, 2024
2 parents 7b658c5 + 5be2102 commit 3639bf4
Showing 293 changed files with 19,355 additions and 2,789 deletions.
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -0,0 +1 @@
* @confluentinc/clients @confluentinc/data-governance
2 changes: 2 additions & 0 deletions .gitignore
@@ -6,6 +6,8 @@ obj/
*.dylib
*.csproj.user
*.xproj.user
*.sln.*.user
.idea
.vs
.vscode
todo.txt
43 changes: 30 additions & 13 deletions .semaphore/semaphore.yml
@@ -70,17 +70,19 @@ blocks:
- wget https://dot.net/v1/dotnet-install.ps1 -OutFile dotnet-install.ps1
- powershell -ExecutionPolicy ByPass -File dotnet-install.ps1 -Version 6.0.403 -InstallDir C:\dotnet
- $Env:Path += ";C:\dotnet"
- dotnet tool update -g docfx
- dotnet restore
- dotnet build Confluent.Kafka.sln -c ${Env:CONFIGURATION}
- dotnet pack src/Confluent.Kafka/Confluent.Kafka.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry/Confluent.SchemaRegistry.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption/Confluent.SchemaRegistry.Encryption.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption.Aws/Confluent.SchemaRegistry.Encryption.Aws.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption.Azure/Confluent.SchemaRegistry.Encryption.Azure.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption.Gcp/Confluent.SchemaRegistry.Encryption.Gcp.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption.HcVault/Confluent.SchemaRegistry.Encryption.HcVault.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Rules/Confluent.SchemaRegistry.Rules.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Serdes.Avro/Confluent.SchemaRegistry.Serdes.Avro.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Serdes.Protobuf/Confluent.SchemaRegistry.Serdes.Protobuf.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Serdes.Json/Confluent.SchemaRegistry.Serdes.Json.csproj -c ${Env:CONFIGURATION} --version-suffix ci-${Env:SEMAPHORE_JOB_ID} --output artifacts
- docfx doc/docfx.json
- tar.exe -cvzf docs-${Env:SEMAPHORE_JOB_ID}.zip doc/_site/*
- move docs-${Env:SEMAPHORE_JOB_ID}.zip artifacts
- artifact push workflow artifacts
- name: 'Windows Artifacts on tagged commits'
run:
@@ -97,17 +99,19 @@ blocks:
- wget https://dot.net/v1/dotnet-install.ps1 -OutFile dotnet-install.ps1
- powershell -ExecutionPolicy ByPass -File dotnet-install.ps1 -Version 6.0.403 -InstallDir C:\dotnet
- $Env:Path += ";C:\dotnet"
- dotnet tool update -g docfx
- dotnet restore
- dotnet build Confluent.Kafka.sln -c ${Env:CONFIGURATION}
- dotnet pack src/Confluent.Kafka/Confluent.Kafka.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry/Confluent.SchemaRegistry.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption/Confluent.SchemaRegistry.Encryption.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption.Aws/Confluent.SchemaRegistry.Encryption.Aws.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption.Azure/Confluent.SchemaRegistry.Encryption.Azure.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption.Gcp/Confluent.SchemaRegistry.Encryption.Gcp.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Encryption.HcVault/Confluent.SchemaRegistry.Encryption.HcVault.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Rules/Confluent.SchemaRegistry.Rules.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Serdes.Avro/Confluent.SchemaRegistry.Serdes.Avro.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Serdes.Protobuf/Confluent.SchemaRegistry.Serdes.Protobuf.csproj -c ${Env:CONFIGURATION} --output artifacts
- dotnet pack src/Confluent.SchemaRegistry.Serdes.Json/Confluent.SchemaRegistry.Serdes.Json.csproj -c ${Env:CONFIGURATION} --output artifacts
- docfx doc/docfx.json
- tar.exe -cvzf docs-${Env:SEMAPHORE_JOB_ID}.zip doc/_site/*
- move docs-${Env:SEMAPHORE_JOB_ID}.zip artifacts
- artifact push workflow artifacts
- name: 'Integration tests'
dependencies: [ ]
@@ -117,14 +121,27 @@
type: s1-prod-ubuntu20-04-amd64-2
prologue:
commands:
- docker login --username $DOCKERHUB_USER --password $DOCKERHUB_APIKEY
- '[[ -z $DOCKERHUB_APIKEY ]] || docker login --username $DOCKERHUB_USER --password $DOCKERHUB_APIKEY'
jobs:
- name: 'Build and test'
- name: 'Build documentation'
commands:
- dotnet tool update -g docfx
- docfx doc/docfx.json
- name: 'Build and test with "classic" protocol'
commands:
- cd test/docker && docker-compose up -d && sleep 30 && cd ../..
- export SEMAPHORE_SKIP_FLAKY_TETSTS='true'
- export SEMAPHORE_SKIP_FLAKY_TESTS='true'
- dotnet restore
- cd test/Confluent.Kafka.IntegrationTests && dotnet test -l "console;verbosity=normal" && cd ../..
- name: 'Build and test with "consumer" protocol'
commands:
- cd test/docker && docker-compose -f docker-compose-kraft.yaml up -d && cd ../..
- sleep 300
- export SEMAPHORE_SKIP_FLAKY_TESTS='true'
- export TEST_CONSUMER_GROUP_PROTOCOL=consumer
- dotnet restore
- cd test/Confluent.Kafka.IntegrationTests && dotnet test -l "console;verbosity=normal" && cd ../..

- name: 'Schema registry and serdes integration tests'
dependencies: [ ]
task:
@@ -133,12 +150,12 @@
type: s1-prod-ubuntu20-04-amd64-2
prologue:
commands:
- docker login --username $DOCKERHUB_USER --password $DOCKERHUB_APIKEY
- '[[ -z $DOCKERHUB_APIKEY ]] || docker login --username $DOCKERHUB_USER --password $DOCKERHUB_APIKEY'
jobs:
- name: 'Build and test'
commands:
- cd test/docker && docker-compose up -d && cd ../..
- export SEMAPHORE_SKIP_FLAKY_TETSTS='true'
- export SEMAPHORE_SKIP_FLAKY_TESTS='true'
- dotnet restore
- cd test/Confluent.SchemaRegistry.Serdes.IntegrationTests && dotnet test -l "console;verbosity=normal" && cd ../..
# - cd test/Confluent.SchemaRegistry.IntegrationTests && dotnet test -l "console;verbosity=normal" && cd ../..
1 change: 1 addition & 0 deletions 3RD_PARTY.md
@@ -9,3 +9,4 @@ To add your project, open a pull request!
- [Chr.Avro](https://github.com/ch-robinson/dotnet-avro) - A modern and flexible Avro implementation for .NET. Integrates seamlessly with Confluent.Kafka and Schema Registry.
- [Multi Schema Avro Deserializer](https://github.com/ycherkes/multi-schema-avro-desrializer) - Avro deserializer for reading messages serialized with multiple schemas.
- [OpenSleigh.Transport.Kafka](https://github.com/mizrael/OpenSleigh/tree/develop/src/OpenSleigh.Transport.Kafka) - A Kafka Transport for OpenSleigh, a distributed saga management library.
- [SlimMessageBus.Host.Kafka](https://github.com/zarusz/SlimMessageBus) - Apache Kafka transport for SlimMessageBus (lightweight message bus for .NET)
94 changes: 83 additions & 11 deletions CHANGELOG.md
@@ -1,3 +1,75 @@
# 2.5.3

v2.5.3 is a maintenance release with the following fixes and enhancements:

## Enhancements

* References librdkafka.redist 2.5.3. Refer to the [librdkafka v2.5.3 release notes](https://github.com/confluentinc/librdkafka/releases/tag/v2.5.3) for more information.

## Fixes

* Properly handle messages with well-known types in Protobuf serializer
* Use AES128_GCM in the Local KMS client, for consistency with Java/Go
* Include deleted schemas when getting schemas by subject and version
* Handle signed ints when transforming Protobuf payloads
* Allow null SchemaRegistryClient in AsyncSerde constructor

# 2.5.2

> [!WARNING]
> Versions 2.5.0, 2.5.1 and 2.5.2 have a regression in which an assert is triggered during the **PushTelemetry** call. This happens when no metric is matched on the client side among those requested by the broker subscription.
>
> You won't face any problem if:
> * The broker doesn't support [KIP-714](https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability).
> * The [KIP-714](https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability) feature is disabled on the broker side.
> * The [KIP-714](https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability) feature is disabled on the client side. It is enabled by default; set the configuration property `enable.metrics.push` to `false` to disable it.
> * [KIP-714](https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability) is enabled on the broker side but no subscription is configured there.
> * [KIP-714](https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability) is enabled on the broker side with subscriptions that match the [KIP-714](https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability) metrics defined on the client.
>
> Having said this, we strongly recommend using `v2.5.3` or above to avoid this regression entirely; a configuration sketch of the client-side opt-out follows this note.
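
Until you can upgrade, a minimal client-side workaround sketch: it assumes a standard `ProducerConfig` and passes `enable.metrics.push` through as a raw librdkafka key via `Config.Set`.

```csharp
using Confluent.Kafka;

var config = new ProducerConfig
{
    BootstrapServers = "localhost:9092"  // placeholder broker address
};

// Disable KIP-714 client metrics push (enabled by default) so the
// PushTelemetry assert in v2.5.0-2.5.2 cannot be triggered.
config.Set("enable.metrics.push", "false");

using var producer = new ProducerBuilder<Null, string>(config).Build();
```
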
## Fixes

- Fix CSFLE (client-side field-level encryption) to use the Google Tink format for DEKs for interoperability with clients in other languages (Java, Go, etc.).
- Improve error when specifying an invalid KMS type for CSFLE
- Enhance CSFLE examples with KMS configuration settings


# 2.5.1

## Fixes

- Fix CSFLE (client-side field-level encryption) when using Azure Key Vault by specifying RsaOaep256 (instead of RsaOaep) for interoperability with clients in other languages (Java, Go, etc.).
- Fix AvroSerializer configuration to allow using schema normalization (a configuration sketch follows this list).
- Upgrade Azure Identity library to 1.11.4 to address a vulnerability in previous versions.
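
A minimal sketch of enabling schema normalization on the Avro serdes; the Schema Registry URL is a placeholder, and `GenericRecord` comes from the Apache Avro package:

```csharp
using Avro.Generic;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;

var schemaRegistryConfig = new SchemaRegistryConfig { Url = "http://localhost:8081" };
using var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig);

// NormalizeSchemas asks Schema Registry to normalize the schema during
// registration/lookup, so logically-equal schemas resolve to the same ID.
var avroSerializerConfig = new AvroSerializerConfig { NormalizeSchemas = true };

var serializer = new AvroSerializer<GenericRecord>(schemaRegistry, avroSerializerConfig);
```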


# 2.5.0

## Enhancements

- References librdkafka.redist 2.5.0. Refer to the [librdkafka v2.5.0 release notes](https://github.com/confluentinc/librdkafka/releases/tag/v2.5.0) for more information.
- Add support for metadata and ruleSet in the schema registry client, which together support data
contracts.
- Add support for CSFLE (client-side field-level encryption) for AWS, Azure, GCP, and HashiCorp
Vault. See the encryption examples in the examples directory.
- Add support for CEL, CEL_FIELD, and JSONata rules.

## Fixes

- Switch license expression and other repo information. (#2192, @thompson-tomo)


# 2.4.0

## Enhancements

- References librdkafka.redist 2.4.0. Refer to the [librdkafka v2.4.0 release notes](https://github.com/confluentinc/librdkafka/releases/tag/v2.4.0) for more information.
- [KIP-848 EA](https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol): Added the new KIP-848-based consumer group rebalance protocol.
  Integration tests now run with the new consumer group protocol. The feature is **Early Access** and not production ready; refer to the
  [detailed documentation](https://github.com/confluentinc/librdkafka/blob/master/INTRODUCTION.md#next-generation-of-the-consumer-group-protocol-kip-848) for more information. (#2212) A configuration sketch follows.
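
A minimal sketch of opting a consumer into the Early Access protocol. It mirrors the `TEST_CONSUMER_GROUP_PROTOCOL=consumer` setting used by the integration tests above and assumes the librdkafka `group.protocol` property is passed through as a raw key:

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",  // placeholder broker address
    GroupId = "kip848-demo"
};

// Early Access, not production ready: select the KIP-848 "consumer"
// rebalance protocol instead of the default "classic" protocol.
config.Set("group.protocol", "consumer");

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("my-topic");
```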


# 2.3.0

## Enhancements
@@ -6,7 +78,7 @@
- [KIP-430](https://cwiki.apache.org/confluence/display/KAFKA/KIP-430+-+Return+Authorized+Operations+in+Describe+Responses):
Return authorized operations in describe responses (#2021, @jainruchir).
- [KIP-396](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97551484): Added support for ListOffsets Admin API (#2086).
- Add `Rack` to the `Node` type, so AdminAPI calls can expose racks for brokers (currently, all Describe
Responses) (#2021, @jainruchir).
- Added support for external JSON schemas in `JsonSerializer` and `JsonDeserializer` (#2042).
- Added compatibility methods to CachedSchemaRegistryClient ([ISBronny](https://github.com/ISBronny), #2097).
@@ -89,15 +161,15 @@ OpenSSL 3.0.x upgrade in librdkafka requires a major version bump, as some legac
**Note: There were no 2.0.0 and 2.0.1 releases.**


# 1.9.3

## Enhancements

- Added `NormalizeSchemas` configuration property to the Avro, Json and Protobuf serdes.

## Fixes

- Schema Registry authentication now works with passwords that contain the ':' character ([luismedel](https://github.com/luismedel)).
- Added missing librdkafka internal and broker error codes to the `ErrorCode` enum.


@@ -160,7 +232,7 @@ for a complete list of changes, enhancements, fixes and upgrade considerations.

# 1.8.1

## Enhancements

- Updated `NJsonSchema` to v10.5.2.

@@ -309,7 +381,7 @@ Version 1.6.0 and 1.6.1 were not released.
## Changes

- Some internal improvements to the `Consumer` (thanks to [@andypook](https://github.com/AndyPook)).
- BREAKING CHANGE: `net452` is no longer a target framework of `Confluent.SchemaRegistry` or `Confluent.SchemaRegistry.Serdes` due to the switch to the official Apache Avro package which only targets `netstandard2.0`.
- Marked properties on `ConsumeResult` that simply delegate to the corresponding properties on `ConsumeResult.Message` as obsolete (see the sketch below).
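
A short sketch of the preferred access pattern, reading through `ConsumeResult.Message` rather than the obsolete delegating shortcuts; broker, group, and topic names are placeholders:

```csharp
using System;
using Confluent.Kafka;

var config = new ConsumerConfig { BootstrapServers = "localhost:9092", GroupId = "demo" };
using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("my-topic");

var result = consumer.Consume(TimeSpan.FromSeconds(5));
if (result != null)
{
    // Preferred: read via the Message property ...
    var value = result.Message.Value;
    // ... not the obsolete delegating property result.Value.
}
```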

## Fixes
@@ -351,7 +423,7 @@ Version 1.6.0 and 1.6.1 were not released.
## Bugs

**WARNING: There is an issue with SASL GSSAPI authentication on Windows with this release. This is resolved in v1.2.1.**

## Enhancements

- References librdkafka v1.2.0. Refer to the [release notes](https://github.com/edenhill/librdkafka/releases/tag/v1.2.0) for more information. Headline feature is consumer side support for transactions.
@@ -415,7 +487,7 @@ Feature highlights:
- Non-blocking support for async serializers.
- Very flexible:
- e.g. can be easily extended to support header serialization.
- Capability to specify custom timestamps when producing messages.
- Message persistence status support.
- Renamed ProduceAsync variants with a callback to Produce.
- Consumer improvements:
@@ -532,7 +604,7 @@ Feature highlights:

- Revamped producer and consumer serialization functionality.
- There are now two types of serializer and deserializer: `ISerializer<T>` / `IAsyncSerializer<T>` and `IDeserializer<T>` / `IAsyncDeserializer<T>`.
- `ISerializer<T>`/`IDeserializer<T>` are appropriate for most use cases (a minimal sketch follows this list).
- `IAsyncSerializer<T>`/`IAsyncDeserializer<T>` are async friendly, but less performant (they return `Task`s).
- Changed the name of `Confluent.Kafka.Avro` to `Confluent.SchemaRegistry.Serdes` (Schema Registry may support other serialization formats in the future).
- Added an example demonstrating working with protobuf serialized data.
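
A minimal sketch of the two interface shapes; the UTF-8 string codec below is illustrative, not part of the library:

```csharp
using System;
using System.Text;
using Confluent.Kafka;

// Sync variant: appropriate for most use cases.
public class Utf8Serializer : ISerializer<string>
{
    public byte[] Serialize(string data, SerializationContext context)
        => data == null ? null : Encoding.UTF8.GetBytes(data);
}

// The deserializer reads message data via ReadOnlySpan<byte>.
public class Utf8Deserializer : IDeserializer<string>
{
    public string Deserialize(ReadOnlySpan<byte> data, bool isNull, SerializationContext context)
        => isNull ? null : Encoding.UTF8.GetString(data.ToArray());
}
```

These plug in via `SetValueSerializer` / `SetValueDeserializer` on the producer and consumer builders.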
@@ -548,7 +620,7 @@ Feature highlights:
- Notable features: idempotent producer, sparse connections, KIP-62 (max.poll.interval.ms).
- Note: End of partition notification is now disabled by default (enable using the `EnablePartitionEof` config property).
- Removed the `Consumer.OnPartitionEOF` event in favor of notifying of partition EOF via `ConsumeResult.IsPartitionEOF` (see the sketch after this list).
- Removed `ErrorEvent` class and added `IsFatal` to `Error` class.
- The `IsFatal` flag is now set appropriately for all errors (previously it was always set to `false`).
- Added `PersistenceStatus` property to `DeliveryResult`, which provides information on the persistence status of the message.
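
A minimal sketch of the opt-in EOF pattern; broker and topic names are placeholders:

```csharp
using System;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "eof-demo",
    // End-of-partition notification is now disabled by default; opt in here.
    EnablePartitionEof = true
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("my-topic");

var result = consumer.Consume(TimeSpan.FromSeconds(5));
if (result != null && result.IsPartitionEOF)
{
    Console.WriteLine($"Reached end of partition {result.TopicPartition}.");
}
```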

@@ -586,7 +658,7 @@ Feature highlights:
- Producers can utilize the underlying librdkafka handle from other Producers (replaces the 0.11.x `GetSerializingProducer` method on the `Producer` class).
- `AdminClient` can utilize the underlying librdkafka handle from other `AdminClient`s, `Producer`s or `Consumer`s.
- `IDeserializer` now exposes message data via `ReadOnlySpan<byte>`, directly referencing librdkafka allocated memory. This results in a considerable (up to 2x) performance increase and reduced memory.
- Most blocking operations now accept a `CancellationToken` parameter.
- TODO: in some cases there is no backing implementation yet.
- .NET-specific configuration parameters are all specified/documented in the `ConfigPropertyNames` class.

@@ -612,7 +684,7 @@ Feature highlights:
- `Commit` errors are reported via an exception and method return values have correspondingly changed (a sketch follows this list).
- `ListGroups`, `ListGroup`, `GetWatermarkOffsets`, `QueryWatermarkOffsets`, and `GetMetadata` have been removed from `Producer` and `Consumer` and exposed only via `AdminClient`.
- Added `Consumer.Close`.
- Various methods that formerly returned `TopicPartitionOffsetError` / `TopicPartitionError` now return `TopicPartitionOffset` / `TopicPartition` and throw an exception in
case of error (with a `Result` property of type `TopicPartitionOffsetError` / `TopicPartitionError`).
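
A minimal sketch of the exception-based `Commit` contract, with placeholder broker, group, and topic as in the sketches above:

```csharp
using System;
using Confluent.Kafka;

var config = new ConsumerConfig { BootstrapServers = "localhost:9092", GroupId = "demo" };
using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("my-topic");

try
{
    // Commit now throws on failure instead of returning an error value.
    consumer.Commit();
}
catch (KafkaException e)
{
    Console.WriteLine($"Commit failed: {e.Error.Reason}");
}
```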

