fix(deps): update module github.com/confluentinc/confluent-kafka-go to v2 #2642
This PR contains the following updates:

github.com/confluentinc/confluent-kafka-go: v1.9.2 -> v2.3.0
Release Notes
confluentinc/confluent-kafka-go (github.com/confluentinc/confluent-kafka-go)
v2.3.0
Compare Source
This is a feature release.

- Adds DescribeCluster() and DescribeTopics() (#964, @jainruchir).
- Return authorized operations in Describe Responses (#964, @jainruchir).
- Adds Rack to the Node type, so AdminAPI calls can expose racks for brokers (currently, all Describe Responses) (#964, @jainruchir).
- …the addition of ListOffsets (#1029).
- …GetSchemaMetadata (#1042).
- …availability problems (#998, @kkoehler).
- Adds a CreateTopic method to the MockCluster (#1047, @mimikwang).
- Uses the HTTPS_PROXY environment variable, if set, for the Schema Registry client (#1065, @finncolman).
- Partial support of topic identifiers. Topic identifiers in the metadata response are available through the new DescribeTopics function (#1068).

Fixes

- …returned for pre-registered schema (#971, @srlk).
- …(#1074).
- testing was being imported by a non-test file, testhelpers.go (#1049, @dmlambea).
- The Coordinator field in ConsumerGroupDescription when it is not known: it now contains a Node with ID -1 in that case. Avoids a C segmentation fault.
- Producer.Flush: it was waiting for queue.buffering.max.ms while flushing (#1013).
- …consumer was closing, and during the final partition revoke (#1073).
confluent-kafka-go is based on librdkafka v2.3.0, see the
librdkafka v2.3.0 release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v2.2.0
Compare Source
This is a feature release.

- IncrementalAlterConfigs API (#945).
- User SASL/SCRAM credentials alteration and description (#1004).

Fixes

- …Serializer.Serialize(), caused by an unchecked error (#997, @baganokodo2022).
- …nil slice in AdminClient.ListConsumerGroupOffsets, when earlier it was not processing that correctly (#985, @alexandredantas).
confluent-kafka-go is based on librdkafka v2.2.0, see the
librdkafka v2.2.0 release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v2.1.1
This is a maintenance release.
It is strongly recommended to update to v2.1.1 if v2.1.0 is being used, as it
fixes a critical issue in the consumer (#980).
confluent-kafka-go is based on librdkafka v2.1.1, see the
librdkafka v2.1.1 release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v2.1.0
This is a feature release:
SeekPartitions()
method to seek multiple partitions atonce and deprecated
Seek()
(#940).add offset leader epoch to the TopicPartition
and Message structs (#968).
(#973).
panic when they are used after close (#901).
correctly (#950).
Offset.Set()
(#962, @jdockerty).confluent-kafka-go is based on librdkafka v2.1.0, see the
librdkafka v2.1.0 release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v2.0.2
This is a feature release:

- Added SetSaslCredentials. This new method (on the Producer, Consumer, and AdminClient) allows modifying the stored SASL PLAIN/SCRAM credentials that will be used for subsequent (new) connections to a broker.
- Channel-based producer (Producer ProduceChannel()) and channel-based consumer (Consumer Events()) are deprecated.
- Added IsTimeout() on the Error type. This is a convenience method that checks if the error is due to a timeout.
- The timeout parameter on Seek() is now ignored and an infinite timeout is used; the method will block until the fetcher state is updated (typically within microseconds).
- Add Consumer Group operations to the Admin API:
  - Allow listing consumer groups per state.
  - Partially implemented: support for AlterConsumerGroupOffsets.
  - ListConsumerGroups Admin operation. Supports listing by state.
  - DescribeConsumerGroups Admin operation. Supports multiple groups.
  - DeleteConsumerGroups Admin operation. Supports multiple groups (@vsantwana).
  - ListConsumerGroupOffsets Admin operation. Currently only supports 1 group with multiple partitions. Supports the requireStable option.
  - AlterConsumerGroupOffsets Admin operation. Currently only supports 1 group with multiple offsets.
- Added SetRoundtripDuration to the mock broker for setting RTT delay for a given mock broker (@kkoehler, #892).
Fixes
- The SpecificDeserializer.Deserialize method was not returning its result correctly, and was hence unusable. The return has been fixed (#849).
- …specified in SerializerConfig, was ignored. It is now used as expected (@perdue, #870).
- …panic. This was due to a nil pointer, fixed with proper initialization (@HansK-p, @ju-popov, #878).
Upgrade considerations
The OpenSSL 3.0.x upgrade in librdkafka requires a major version bump, as some legacy ciphers need to be explicitly configured to continue working, but it is highly recommended not to use them. The rest of the API remains backward compatible; see the librdkafka release notes below for details.

The suffix "/v2" was added to the module name, and package imports must reflect this change.
confluent-kafka-go is based on librdkafka v2.0.2, see the
librdkafka v2.0.0 release notes
and later ones for a complete list of changes, enhancements, fixes and upgrade considerations.
Note: There were no confluent-kafka-go v2.0.0 or v2.0.1 releases.
v1.9.2
This is a maintenance release:
confluent-kafka-go is based on librdkafka v1.9.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.9.1
This is a feature release:
confluent-kafka-go is based on librdkafka v1.9.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.9.0
This is a feature release:
- Added MockCluster, which can be used in place of a real Kafka cluster (by @SourceFellows and @kkoehler, #729). See examples/mock_cluster.

Fixes

- …(#798).
- …needed (@jliunyu, #757).
confluent-kafka-go is based on librdkafka v1.9.0, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.8.2
This is a maintenance release:
confluent-kafka-go is based on librdkafka v1.8.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Note: There were no confluent-kafka-go v1.8.0 and v1.8.1 releases.
v1.7.0
Enhancements
- Message.Headers are now available in the delivery report if the Producer's go.delivery.report.fields configuration property is set to include headers, e.g.: "go.delivery.report.fields": "key,value,headers". This comes at a performance cost and is thus disabled by default.
Fixes
ReplicationFactor without specifying an explicit ReplicaAssignment, this is
now fixed.
confluent-kafka-go is based on librdkafka v1.7.0, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
v1.6.1
v1.6.1 is a feature release:
- …for an example of how to use the new incremental rebalancing consumer.
- …latency by sticking to a single random partition for some time.
- …can now be used for multiple input partitions.
confluent-kafka-go is based on and bundles librdkafka v1.6.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Enhancements
- go.delivery.report.fields=all,key,value,none can now be used to avoid copying message key and/or value to the delivery report, improving performance in high-throughput applications (by @kevinconaway).
Fixes
- …callback; this is now fixed.
v1.5.2
v1.5.2 is a maintenance release with the following fixes and enhancements:
confluent-kafka-go is based on librdkafka v1.5.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR has been generated by Mend Renovate. View repository job log here.