
Is it possible to connect to a secured Kafka cluster (SASL) #9

Closed
shaaimin opened this issue Jul 22, 2019 · 10 comments
Labels
question Further information is requested

Comments

@shaaimin
Contributor

Is it possible to connect to a secured Kafka cluster (SASL) with client authentication?
If so, could you please share a sample configuration?

@shaaimin shaaimin changed the title Is it possible to connect to a secured Kafka cluster (SASL) with client authentication Is it possible to connect to a secured Kafka cluster (SASL) Jul 22, 2019
@ekoutanov
Member

The fork that Kafdrop 3.x originated from had added SASL support. Please take a look at the author's description on how to use that feature.

@ekoutanov ekoutanov added the question Further information is requested label Jul 24, 2019
@shaaimin
Contributor Author

Got it, thank you for the quick reply. I will refer to that page.
Thank you.

@ekoutanov
Member

@shaaimin would you mind posting back on this thread to confirm that you got it to work? I've never tried the SASL feature; I'd be keen to know whether it works as advertised.

@shaaimin
Contributor Author

shaaimin commented Aug 13, 2019

@ekoutanov sorry for the late reply.

Sure, SASL_SSL works for me, but the following exception occurred:
2019-08-13 11:26:30.352 INFO 4064 [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=adminclient-1] Metadata update failed
org.apache.kafka.common.errors.DisconnectException: Cancelled fetchMetadata request with correlation id 203 due to node -2 being disconnected

I created a file (kaas_stg_jaas.conf) for authentication and then ran:

java -jar kafdrop-3.8.0.jar --zookeeper.connect=xxx.xxx.xxx.xxx:2181 --kafka.brokerConnect=xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092 --kafka.isSecured=true --kafka.env=stg --server.servlet.context-path="/kafdrop" --user.dir="xxxxx"

user.dir is the folder path where kaas_stg_jaas.conf is stored.

kaas_stg_jaas.conf.txt
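[Editor's note: the attached kaas_stg_jaas.conf.txt is not reproduced in this thread. A typical JAAS file for SASL/PLAIN client authentication looks like the following sketch; the username and password are placeholders, not values from the original attachment:]

```
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="xxxx"
  password="xxxx";
};
```

Such a file is usually passed to the JVM via -Djava.security.auth.login.config=/path/to/jaas.conf, though newer Kafka clients can instead set sasl.jaas.config directly as a client property (as shown later in this thread).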

@shaaimin
Contributor Author

shaaimin commented Aug 13, 2019

@ekoutanov
I found that the following settings are also needed for the AdminClient, in the init method of KafkaHighLevelAdminClient.java:

if (kafkaConfiguration.getIsSecured()) {
  props.put("security.protocol", "SASL_PLAINTEXT");
  props.put("sasl.mechanism", "PLAIN");
  props.put("sasl.jaas.config",
      "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"sync\" password=\"sync#secret\";");
}

After adding those settings locally, the DisconnectException was resolved.

Another issue: the host and port in the BrokerVO objects returned by kafkaMonitor.getBrokers() are null.
Is there any configuration missing?

When I ran the command bin/zookeeper-shell.sh localhost:2181 <<< "get /brokers/ids/1",
I got the following information; host is null:
{"listener_security_protocol_map":{"SASL_PLAINTEXT":"SASL_PLAINTEXT"},"endpoints":["SASL_PLAINTEXT://xxx.xxx.xxx.xxx:9091"],"jmx_port":9991,"host":null,"timestamp":"1565681556905","port":-1,"version":4}
cZxid = 0x6e000008d2
ctime = Tue Aug 13 16:32:36 JST 2019
mZxid = 0x6e000008d2
mtime = Tue Aug 13 16:32:36 JST 2019
pZxid = 0x6e000008d2
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x1011b1b799e000c
dataLength = 217
numChildren = 0

I also found a similar case:
Yelp/kafka-utils#157
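[Editor's note, for context, not from the original thread: the "host":null and "port":-1 in the znode above are expected when a broker registers only non-PLAINTEXT listeners. The legacy host/port fields in ZooKeeper are populated only for a PLAINTEXT listener; the real addresses live in the endpoints array, so consumers of the znode must parse endpoints rather than the legacy fields. A broker configuration producing such a registration might look like this sketch, with placeholder hostnames:]

```
# server.properties (illustrative)
listeners=SASL_PLAINTEXT://0.0.0.0:9091
advertised.listeners=SASL_PLAINTEXT://broker1.example.com:9091
```

With only a SASL_PLAINTEXT listener advertised, tools that read the legacy host/port fields will see null/-1, as in the output above.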

@ekoutanov
Member

@shaaimin I've done a bunch of changes to support SASL and TLS, described here. It provides a lot more flexibility than the current approach. The old way will still work, but should be considered deprecated. Ideally you should try to get off it as soon as possible.

@shaaimin
Contributor Author

shaaimin commented Sep 27, 2019

@ekoutanov that's great. I tried the new approach using kafka.properties, and it works well.

@adivardhan

> @ekoutanov that's great, i had tried the new approach using kafka.properties, it works well.

@shaaimin Could you please provide an example of all the necessary configuration, you've done to make it work?

@shaaimin
Contributor Author

@adivardhan sure, the following is the configuration of kafka.properties on my side:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="xxxx" password="xxx";

And I used the following command to run it:
java -jar kafdrop-3.9.0.jar --zookeeper.connect=server1:2181,server2:2181,server3:2181 --kafka.brokerConnect=server1:9092,server2:9092,server3:9092 --kafka.properties="/usr/local/kafdrop/kafka.properties" --server.servlet.context-path="/kafdrop" > kafdrop.log &

@adivardhan

> @adivardhan sure, the following is the configuration of "kafka.properties" on my side.
> security.protocol=SASL_PLAINTEXT
> sasl.mechanism=PLAIN
> sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="xxxx" password="xxx";
>
> and i used the following to run.
> java -jar kafdrop-3.9.0.jar --zookeeper.connect=server1:2181,server2:2181,server3:2181 --kafka.brokerConnect=server1:9092,server2:9092,server3:9092 --kafka.properties="/usr/local/kafdrop/kafka.properties" --server.servlet.context-path="/kafdrop" > kafdrop.log &

Thanks. It seems you're not using SSL for encryption.
A couple of files (keystore and truststore) and the corresponding properties would probably be needed for that.
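[Editor's note: an illustrative sketch, not from the original thread. A kafka.properties for SASL over TLS (SASL_SSL) would typically add truststore settings on top of the SASL ones shown above; paths and passwords here are placeholders:]

```
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="xxxx" password="xxxx";
ssl.truststore.location=/usr/local/kafdrop/kafka.truststore.jks
ssl.truststore.password=changeit
# For mutual TLS (client certificate authentication), a keystore is also needed:
# ssl.keystore.location=/usr/local/kafdrop/kafka.keystore.jks
# ssl.keystore.password=changeit
```

These are standard Kafka client property names; the truststore verifies the brokers' certificates, while the commented keystore entries apply only when brokers require client certificates.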
