
fix(deps): update module github.com/confluentinc/confluent-kafka-go to v2 #2642

Closed


@renovate renovate bot commented Nov 15, 2023


This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| github.com/confluentinc/confluent-kafka-go | require | major | `v1.9.2` -> `v2.3.0` |

Release Notes

confluentinc/confluent-kafka-go (github.com/confluentinc/confluent-kafka-go)

v2.3.0


This is a feature release.

  • Adds support for AdminAPI DescribeCluster() and DescribeTopics()
    (#​964, @​jainruchir).
  • KIP-430:
    Return authorized operations in Describe Responses.
    (#​964, @​jainruchir).
  • Adds Rack to the Node type, so AdminAPI calls can expose racks for brokers
    (currently, all Describe Responses) (#​964, @​jainruchir).
  • KIP-396: completed the implementation with
    the addition of ListOffsets (#​1029).
  • Adds cache for Schema Registry client's GetSchemaMetadata (#​1042).
  • MockCluster can now be shut down and started again to test broker
    availability problems (#​998, @​kkoehler).
  • Adds CreateTopic method to the MockCluster (#​1047, @​mimikwang).
  • Honor HTTPS_PROXY environment variable, if set, for the Schema Registry
    client (#​1065, @​finncolman).
  • KIP-516:
    Partial support of topic identifiers. Topic identifiers in metadata response
    are available through the new DescribeTopics function (#​1068).
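
The new Describe calls hang off the AdminClient. Below is a minimal sketch of DescribeTopics, written against my reading of the v2.3.0 admin API (NewTopicCollectionOfTopicNames, SetAdminOptionIncludeAuthorizedOperations, DescribeTopicsResult); the topic name and broker address are placeholders, and running it requires a live cluster:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
	ac, err := kafka.NewAdminClient(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	if err != nil {
		panic(err)
	}
	defer ac.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Describe one topic; the result exposes the KIP-516 topic ID and,
	// per KIP-430, the authorized operations when requested.
	res, err := ac.DescribeTopics(ctx,
		kafka.NewTopicCollectionOfTopicNames([]string{"my-topic"}),
		kafka.SetAdminOptionIncludeAuthorizedOperations(true))
	if err != nil {
		panic(err)
	}
	for _, td := range res.TopicDescriptions {
		fmt.Println(td.Name, td.TopicID, len(td.Partitions))
	}
}
```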

Fixes

  • Fixes a bug in the mock schema registry client where the wrong ID was being
    returned for pre-registered schema (#​971, @​srlk).
  • The minimum version of Go supported has been changed from 1.16 to 1.17
    (#​1074).
  • Fixes an issue where testing was being imported by a non-test file,
    testhelpers.go (#​1049, @​dmlambea).
  • Fixes the optional Coordinator field in ConsumerGroupDescription in case
    it's not known. It now contains a Node with ID -1 in that case.
    Avoids a C segmentation fault.
  • Fixes an issue with Producer.Flush. It was waiting for
    queue.buffering.max.ms while flushing (#​1013).
  • Fixes an issue where consumer methods would not be allowed to run while the
    consumer was closing, and during the final partition revoke (#​1073).

confluent-kafka-go is based on librdkafka v2.3.0, see the
librdkafka v2.3.0 release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v2.2.0


This is a feature release.

Fixes

  • Fixes a nil pointer bug in the protobuf Serializer.Serialize(), caused by
    an unchecked error (#​997, @​baganokodo2022).
  • Fixes incorrect protobuf FileDescriptor references (#​989, @​Mrmann87).
  • Allow fetching all partition offsets for a consumer group by passing a
    nil slice in AdminClient.ListConsumerGroupOffsets; previously this was
    not handled correctly (#​985, @​alexandredantas).
  • Deprecate m.LeaderEpoch in favor of m.TopicPartition.LeaderEpoch (#​1012).

confluent-kafka-go is based on librdkafka v2.2.0, see the
librdkafka v2.2.0 release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v2.1.1

This is a maintenance release.

It is strongly recommended to update to v2.1.1 if v2.1.0 is being used, as it
fixes a critical issue in the consumer (#​980).

confluent-kafka-go is based on librdkafka v2.1.1, see the
librdkafka v2.1.1 release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v2.1.0

This is a feature release:

  • Added Consumer SeekPartitions() method to seek multiple partitions at
    once and deprecated Seek() (#​940).
  • KIP-320:
    add offset leader epoch to the TopicPartition
    and Message structs (#​968).
  • The minimum version of Go supported has been changed from 1.14 to 1.16
    (#​973).
  • Add validation on the Producer, the Consumer and the AdminClient to prevent
    panic when they are used after close (#​901).
  • Fix bug causing schema-registry URL with existing path to not be parsed
    correctly (#​950).
  • Support for Offset types on Offset.Set() (#​962, @​jdockerty).
  • Added example for using rebalance callback with manual commit.
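
For illustration, a hedged sketch of the new SeekPartitions call, assuming it takes and returns a []kafka.TopicPartition with per-partition errors on the returned slice (the topic name is a placeholder):

```go
package example

import "github.com/confluentinc/confluent-kafka-go/v2/kafka"

// seekBoth rewinds two partitions of topic on an existing, assigned consumer c.
// Assumed signature: (*kafka.Consumer).SeekPartitions([]kafka.TopicPartition) ([]kafka.TopicPartition, error).
func seekBoth(c *kafka.Consumer, topic string) error {
	parts := []kafka.TopicPartition{
		{Topic: &topic, Partition: 0, Offset: 42},
		{Topic: &topic, Partition: 1, Offset: kafka.OffsetBeginning},
	}
	// One call replaces two deprecated Seek() invocations.
	seeked, err := c.SeekPartitions(parts)
	if err != nil {
		return err
	}
	for _, tp := range seeked {
		if tp.Error != nil {
			return tp.Error // per-partition failures are reported here
		}
	}
	return nil
}
```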

confluent-kafka-go is based on librdkafka v2.1.0, see the
librdkafka v2.1.0 release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v2.0.2

This is a feature release:

  • Added SetSaslCredentials. This new method (on the Producer, Consumer, and
    AdminClient) allows modifying the stored SASL PLAIN/SCRAM credentials that
    will be used for subsequent (new) connections to a broker.
  • Channel based producer (Producer ProduceChannel()) and channel based
    consumer (Consumer Events()) are deprecated.
  • Added IsTimeout() on Error type. This is a convenience method that checks
    if the error is due to a timeout.
  • The timeout parameter on Seek() is now ignored and an infinite timeout is
    used; the method will block until the fetcher state is updated (typically
    within microseconds).
  • The minimum version of Go supported has been changed from 1.11 to 1.14.
  • KIP-222
    Add Consumer Group operations to Admin API.
  • KIP-518
    Allow listing consumer groups per state.
  • KIP-396
    Partially implemented: support for AlterConsumerGroupOffsets.
  • As a result of the above KIPs, added (#​923):
    • ListConsumerGroups Admin operation. Supports listing by state.
    • DescribeConsumerGroups Admin operation. Supports multiple groups.
    • DeleteConsumerGroups Admin operation. Supports multiple groups (@​vsantwana).
    • ListConsumerGroupOffsets Admin operation. Currently, only supports
      1 group with multiple partitions. Supports the requireStable option.
    • AlterConsumerGroupOffsets Admin operation. Currently, only supports
      1 group with multiple offsets.
  • Added SetRoundtripDuration to the mock broker for setting RTT delay for
    a given mock broker (@​kkoehler, #​892).
  • Built-in support for Linux/arm64 (#​933).
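
Two of the additions above can be sketched together. This assumes SetSaslCredentials takes (username, password) strings and returns an error, and that client errors surface as kafka.Error, which is my reading of the v2 API; the credentials are placeholders:

```go
package example

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

// rotateAndProbe sketches credential rotation plus timeout detection.
func rotateAndProbe(p *kafka.Producer) {
	// Rotate SASL credentials; only subsequent (new) broker connections use them.
	if err := p.SetSaslCredentials("new-user", "new-secret"); err != nil {
		log.Fatal(err)
	}
	// IsTimeout() distinguishes timeouts from other client errors.
	if _, err := p.GetMetadata(nil, true, 5000); err != nil {
		if kerr, ok := err.(kafka.Error); ok && kerr.IsTimeout() {
			log.Println("metadata request timed out")
		}
	}
}
```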
Fixes
  • The SpecificDeserializer.Deserialize method was not returning its result
    correctly, and was hence unusable. The return has been fixed (#​849).
  • The schema ID to use during serialization, specified in SerializerConfig,
    was ignored. It is now used as expected (@​perdue, #​870).
  • Creating a new schema registry client with an SSL CA Certificate led to a
    panic. This was due to a nil pointer, fixed with proper initialization
    (@​HansK-p, @​ju-popov, #​878).
Upgrade considerations
  • OpenSSL 3.0.x upgrade in librdkafka requires a major version bump, as some legacy
    ciphers need to be explicitly configured to continue working, but it is highly
    recommended not to use them.
    The rest of the API remains backward compatible, see the librdkafka release notes
    below for details.
  • As required by the Go module system, a suffix with the new major version has been
    added to the module name, and package imports must reflect this change.
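
Concretely, the upgrade this PR performs means bumping go.mod and rewriting imports to carry the major-version suffix; a before/after sketch using the versions from this PR:

```go
// go.mod:
//   before: require github.com/confluentinc/confluent-kafka-go v1.9.2
//   after:  require github.com/confluentinc/confluent-kafka-go/v2 v2.3.0

// Import paths gain the /v2 suffix:
import (
	// "github.com/confluentinc/confluent-kafka-go/kafka"    // v1
	"github.com/confluentinc/confluent-kafka-go/v2/kafka" // v2
)
```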

confluent-kafka-go is based on librdkafka v2.0.2, see the
librdkafka v2.0.0 release notes
and later ones for a complete list of changes, enhancements, fixes and upgrade considerations.

Note: There were no confluent-kafka-go v2.0.0 or v2.0.1 releases.

v1.9.2

This is a maintenance release:

confluent-kafka-go is based on librdkafka v1.9.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.9.1

This is a feature release:

confluent-kafka-go is based on librdkafka v1.9.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.9.0

This is a feature release:

Fixes

confluent-kafka-go is based on librdkafka v1.9.0, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.8.2

This is a maintenance release:

  • Bundles librdkafka v1.8.2
  • Check termination channel while reading delivery reports (by @​zjj)
  • Added convenience method Consumer.StoreMessage() (@​finncolman, #​676)

confluent-kafka-go is based on librdkafka v1.8.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

Note: There were no confluent-kafka-go v1.8.0 and v1.8.1 releases.

v1.7.0

Enhancements
  • Experimental Windows support (by @​neptoess).
  • The produced message headers are now available in the delivery report
    Message.Headers if the Producer's go.delivery.report.fields
    configuration property is set to include headers, e.g.:
    "go.delivery.report.fields": "key,value,headers"
    This comes at a performance cost and is thus disabled by default.
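
    As a sketch, the opt-in is a producer configuration setting (the broker
    address is a placeholder):

```go
// Headers are then populated on Message.Headers in delivery reports;
// including them costs an extra copy per message, hence off by default.
p, err := kafka.NewProducer(&kafka.ConfigMap{
	"bootstrap.servers":         "localhost:9092",
	"go.delivery.report.fields": "key,value,headers",
})
```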
Fixes
  • AdminClient.CreateTopics() previously did not accept the default value (-1)
    for ReplicationFactor without an explicit ReplicaAssignment; this is now
    fixed.

confluent-kafka-go is based on librdkafka v1.7.0, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.6.1

v1.6.1 is a feature release:

  • KIP-429: Incremental consumer rebalancing - see cooperative_consumer_example.go
    for an example of how to use the new incremental rebalancing consumer.
  • KIP-480: Sticky producer partitioner - increase throughput and decrease
    latency by sticking to a single random partition for some time.
  • KIP-447: Scalable transactional producer - a single transaction producer can
    now be used for multiple input partitions.

confluent-kafka-go is based on and bundles librdkafka v1.6.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

Enhancements
  • go.delivery.report.fields=all,key,value,none can now be used to
    avoid copying message key and/or value to the delivery report, improving
    performance in high-throughput applications (by @​kevinconaway).
Fixes
  • Consumer.Close() previously did not trigger the final RevokePartitions
    callback; this is now fixed.

v1.5.2

v1.5.2 is a maintenance release with the following fixes and enhancements:

  • Bundles librdkafka v1.5.2 - see release notes for all enhancements and fixes.
  • Documentation fixes

confluent-kafka-go is based on librdkafka v1.5.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.



This PR has been generated by Mend Renovate. View repository job log here.

@renovate renovate bot requested review from a team as code owners November 15, 2023 00:04
@pellared pellared closed this Nov 15, 2023
@github-actions github-actions bot locked and limited conversation to collaborators Nov 15, 2023
@renovate renovate bot deleted the renovate/github.com-confluentinc-confluent-kafka-go-2.x branch November 15, 2023 09:28