A collection of plugins for Apache Kafka® that are used to enforce configurable controls.
> ℹ️ KAFKA is a registered trademark of The Apache Software Foundation and has been licensed for use by kas-broker-plugins. kas-broker-plugins has no affiliation with and is not endorsed by The Apache Software Foundation.
Currently included here are:

- `io.bf2.kafka.authorizer.CustomAclAuthorizer` - an `Authorizer` implementation
- `io.bf2.kafka.topic.ManagedKafkaCreateTopicPolicy` - a `CreateTopicPolicy` implementation
- `io.bf2.kafka.config.ManagedKafkaAlterConfigPolicy` - an `AlterConfigPolicy` implementation
The objective is to use this `Authorizer` with the Strimzi Cluster Operator when the Strimzi User Operator is not being used, providing a custom `Authorizer` that applies to all the users in the Kafka cluster. There is provision to configure `superUsers` to allow any administrative work.
The Authorizer class and the ACLs for any resource are defined using Strimzi's `Kafka` CR. A sample CR fragment looks like:

```yaml
kafka:
  spec:
    authorization:
      type: custom
      authorizerClass: io.bf2.kafka.authorizer.CustomAclAuthorizer
      superUsers:
      - CN=sre_user
    config:
      kas.authorizer.allowed-listeners: plain-9092,canary-9096
      kas.authorizer.acl.1: permission=allow;topic=foo;operations=read,write,create
      kas.authorizer.acl.2: permission=allow;topic=*;operations=read
      kas.authorizer.acl.3: permission=deny;group=xyz;operations=read,create
```
`allowed-listeners` defines a comma (`,`) separated list of listener names that are treated as granting super-user access: any request made over one of these listeners is always allowed.

Any number of ACL rules can be defined as properties prefixed with `acl.` followed by an incrementing counter. Each entry defines a single rule for one type of resource. The rule syntax is a set of key/value pairs separated by semicolons (`;`).
`permission`, the resource type, and `operations` are the recognized keys.

`permission` accepts exactly one of the following values:

- `allow`
- `deny`

Recognized resource-type values are:

- `topic`
- `group`
- `cluster`
- `transactional_id`
- `delegation_token`
For the value, the name of the topic or group, or a regular expression matching the resource name, is accepted. For example, `topic=*` matches all topics.
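Resource-name matching can be sketched as follows. This is a hypothetical helper, not the plugin's code; it assumes a lone `*` is normalized to a match-all pattern before being compiled as a regular expression:

```java
import java.util.regex.Pattern;

public class ResourceNameMatcher {
    // Assumption: a bare "*" stands for "match everything" and is rewritten
    // to ".*" before regex compilation; other values are compiled as-is.
    static Pattern compile(String value) {
        return "*".equals(value) ? Pattern.compile(".*") : Pattern.compile(value);
    }

    // Returns true when the ACL value matches the full resource name.
    static boolean matches(String aclValue, String resourceName) {
        return compile(aclValue).matcher(resourceName).matches();
    }
}
```

With this sketch, `matches("*", "orders")` and `matches("foo.*", "foobar")` both succeed, while `matches("foo", "bar")` does not.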
`operations` is a comma-separated list of the operations this rule either allows or denies. The supported values are:

- `all`
- `read`
- `write`
- `create`
- `delete`
- `alter`
- `describe`
- `cluster_action`
- `describe_configs`
- `alter_configs`
- `idempotent_write`
> ℹ️ All enum properties above match exactly to the enums defined in the Apache Kafka® Java libraries.
The ACL rules are evaluated in the order they are defined. When a rule matches, the Authorizer returns that rule's evaluation as the decision. When no rule matches, the request is denied by default; consequently, a resource type with no defined ACL is automatically denied.
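The first-match evaluation described above can be sketched as follows. This is purely illustrative; the `Rule` record, the parsing helper, and the exact matching semantics are assumptions, not the plugin's actual classes:

```java
import java.util.*;

public class AclEvaluatorSketch {
    // Simplified rule model; the real authorizer also handles principals,
    // listeners, and regular-expression resource names.
    record Rule(boolean allow, String resourceType, String name, Set<String> operations) {}

    // Parses an entry like "permission=allow;topic=foo;operations=read,write,create".
    static Rule parse(String entry) {
        boolean allow = false;
        String type = null, name = null;
        Set<String> ops = new HashSet<>();
        for (String pair : entry.split(";")) {
            String[] kv = pair.split("=", 2);
            switch (kv[0]) {
                case "permission" -> allow = kv[1].equals("allow");
                case "operations" -> ops.addAll(Arrays.asList(kv[1].split(",")));
                default -> { type = kv[0]; name = kv[1]; }  // e.g. topic=foo, group=xyz
            }
        }
        return new Rule(allow, type, name, ops);
    }

    // Rules are checked in definition order; the first match wins, and
    // no match means deny.
    static boolean authorize(List<Rule> rules, String type, String name, String op) {
        for (Rule r : rules) {
            boolean nameMatch = r.name().equals("*") || r.name().equals(name);
            if (r.resourceType().equals(type) && nameMatch && r.operations().contains(op)) {
                return r.allow();
            }
        }
        return false; // default deny
    }
}
```

Using the sample rules from the CR fragment above, a `write` on topic `foo` is allowed by the first rule, a `read` on any other topic falls through to the `topic=*` rule, and anything else is denied.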
The objective of the `ManagedKafkaCreateTopicPolicy` is to:

- validate the topic replication factor (by default, only 3 is allowed)
- validate the number of partitions in the cluster
- validate the topic configs at creation time

The configuration properties are described below:
| property | description | default value |
|---|---|---|
| `kas.policy.create-topic.max.partitions` | Specifies the upper limit of partitions that should be allowed in the cluster. If this property is not specified, the default of `-1` (no limit) is used. | `-1` |
| `kas.policy.create-topic.partition-counter.timeout-seconds` | Specifies the duration in seconds to use as a timeout when listing and describing topics. | `10` |
| `kas.policy.create-topic.partition-counter.private-topic-prefix` | Specifies the prefix used to match against topic names to detect private/internal topics. | `""` |
| `kas.policy.create-topic.partition-counter.schedule-interval-seconds` | Specifies the interval (in seconds) between partition counts. | `15` |
| `kas.policy.create-topic.partition-limit-enforced` | Feature flag broker property key to allow enabling/disabling of partition limit enforcement. | `false` |
| `kas.policy.topic-config.enforced` | Specifies a mapping between a topic policy key and a value. The property is expressed as a comma-separated list of `key:value` pairs. | `[]` |
| `kas.policy.topic-config.mutable` | Specifies which topic properties are allowed to be configured, as a comma-separated list. The configs in the enforced and range lists are also treated as mutable. | `[]` |
| `kas.policy.topic-config.range` | Specifies a mapping from config key to an inclusive/closed range of values. Entries are formatted as a `key:min:max` triplet using colon (`:`) separators; an empty `min` or `max` leaves that side unbounded. | `[]` |
| `kas.policy.topic-config.topic-config-policy-enforced` | Feature flag broker property key to allow enabling/disabling of topic config policies. | `false` |
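As an illustration of how a `key:min:max` range entry such as `segment.bytes:52428800:` could be parsed and applied, consider the following sketch. It is a hypothetical helper, not the plugin's implementation; an empty bound is assumed to mean unbounded on that side:

```java
import java.util.Optional;

public class RangeConfigSketch {
    // A closed (inclusive) range; an absent bound imposes no constraint.
    record Range(String key, Optional<Long> min, Optional<Long> max) {
        boolean accepts(long value) {
            return min.map(m -> value >= m).orElse(true)
                && max.map(m -> value <= m).orElse(true);
        }
    }

    // Parses a triplet like "segment.bytes:52428800:" (key, min, max).
    static Range parse(String triplet) {
        String[] parts = triplet.split(":", -1); // -1 keeps trailing empty fields
        Optional<Long> min = parts[1].isEmpty()
            ? Optional.empty() : Optional.of(Long.parseLong(parts[1]));
        Optional<Long> max = parts[2].isEmpty()
            ? Optional.empty() : Optional.of(Long.parseLong(parts[2]));
        return new Range(parts[0], min, max);
    }
}
```

Under this reading, `segment.bytes:52428800:` accepts any value of at least 52428800 with no upper bound.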
To configure the create topic policy, add the config to the `Kafka` CR. A sample CR fragment looks like:

```yaml
kafka:
  spec:
    config:
      create.topic.policy.class.name: io.bf2.kafka.topic.ManagedKafkaCreateTopicPolicy
      alter.config.policy.class.name: io.bf2.kafka.config.ManagedKafkaAlterConfigPolicy
      # partition limit setting
      kas.policy.create-topic.max.partitions: 1000
      kas.policy.create-topic.partition-counter.timeout-seconds: 10
      kas.policy.create-topic.partition-counter.private-topic-prefix: "__redhat"
      kas.policy.create-topic.partition-counter.schedule-interval-seconds: 15
      kas.policy.create-topic.partition-limit-enforced: true
      # topic config setting
      kas.policy.topic-config.enforced: compression.type:producer,segment.jitter.ms:0
      kas.policy.topic-config.mutable: cleanup.policy,delete.retention.ms,retention.bytes,retention.ms
      kas.policy.topic-config.range: segment.bytes:52428800:,segment.ms:600000:
```
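How the enforced and mutable lists might combine when validating a requested topic config can be sketched as follows. This is purely illustrative; the method name and exact rejection semantics are assumptions, not the plugin's actual logic:

```java
import java.util.*;

public class TopicConfigPolicySketch {
    // Rejects a requested config when it either contradicts an enforced
    // value or touches a key that is neither enforced nor mutable.
    static void validate(Map<String, String> requested,
                         Map<String, String> enforced,
                         Set<String> mutable) {
        for (Map.Entry<String, String> e : requested.entrySet()) {
            String key = e.getKey();
            if (enforced.containsKey(key)) {
                if (!enforced.get(key).equals(e.getValue())) {
                    throw new IllegalArgumentException(
                        key + " must be " + enforced.get(key));
                }
            } else if (!mutable.contains(key)) {
                throw new IllegalArgumentException(key + " is not a mutable config");
            }
        }
    }
}
```

With the sample CR above, setting `retention.ms` would pass (it is in the mutable list), while setting `compression.type` to anything other than `producer` would be rejected.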
The objective of the `ManagedKafkaAlterConfigPolicy` is to validate the alter topic configs.

The configuration and descriptions are the same as for `ManagedKafkaCreateTopicPolicy`, except that `ManagedKafkaAlterConfigPolicy` only accepts the topic-config configs, i.e.:

- `kas.policy.topic-config.enforced`
- `kas.policy.topic-config.mutable`
- `kas.policy.topic-config.range`
To configure this with Strimzi, this component must be built and its Maven artifact published to a Maven repository reachable by the Strimzi build. Then configure this plugin as a third-party library so that it is pulled into the Strimzi Operator image. See `strimzi/docker-images/kafka/kafka-thirdparty-libs`: add the dependency to one of the `pom.xml` files and build Strimzi.
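The dependency entry added to the chosen `pom.xml` might look like the following; the coordinates and version property shown here are assumptions, so use the actual artifact coordinates produced by your build:

```xml
<!-- Hypothetical coordinates: substitute the groupId/artifactId/version
     of the kas-broker-plugins artifact published by your build. -->
<dependency>
  <groupId>io.bf2</groupId>
  <artifactId>kas-broker-plugins</artifactId>
  <version>${kas-broker-plugins.version}</version>
</dependency>
```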
> ℹ️ Optional - only required when a new release branch is needed. For patch releases, skip this branch creation and instead re-use the existing minor release branch.
If you are starting on the `main` branch, create a new branch from `main`, for example `2.5.x`:

```shell
git checkout -b 2.5.x main
git push upstream 2.5.x
```

Now, from the `2.5.x` branch, make a release of `2.5.0`.
If you are already releasing from a branch, skip the above step of creating a new branch and simply check out that branch.
Releases are performed by modifying the `.github/project.yml` file, setting `current-version` to the release version and `next-version` to the next SNAPSHOT version.
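A `project.yml` fragment for such a release might look like the following; the exact field layout is an assumption based on common release-workflow conventions, so match it to the file already in the repository:

```yaml
# .github/project.yml (illustrative layout)
release:
  current-version: 2.5.0          # the version being released
  next-version: 2.5.1-SNAPSHOT    # the development version after the release
```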
Open a pull request with the changed `project.yml` to initiate the pre-release workflows.
The pull request must be raised against the main repository, not a fork.
The target of the pull request should be either `main` or a release branch (described above).
At this phase, the project milestone is checked to verify that no issues for the release milestone remain open. Additionally, the project's integration tests are run.
Once approved and the pull request is merged, the release action will execute. This action will execute the Maven release plugin to tag the release commit, and build the application artifacts.
If successful, the action will push the new tag to the GitHub repository and generate release notes listing all of the closed issues included in the milestone. Finally, the milestone will be closed.