Support KTable-KTable Foreign-Key Join #52
Comments
Hi @ybuasen, Exactly, the Java library offers a KTable-KTable Foreign-Key Join. This feature is not available yet in Streamiz. I will leave this issue open to track the implementation progress. Regards,
@LGouellec Awesome!!! Looking forward to seeing it in action.
Hi, is this feature currently available within the Streamiz library? |
For now, this feature is not prioritized. But you have a workaround with |
For streams you can join them by remapping the key and then using the join. I was trying to use ToStream, then remap the key and join. I am also getting an error with this code (which might make sense, since changing the keys of a table leads to inconsistencies and lost data when the remapped key is not unique across records).
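Roughly, the pattern looks like this (a simplified sketch rather than the exact code; topic names, store names, and the key-extraction lambda are placeholders, and it assumes the Streamiz DSL mirrors the Java Kafka Streams operators):

```csharp
using Streamiz.Kafka.Net;
using Streamiz.Kafka.Net.State; // InMemory store helper; the exact namespace/helper may differ by version

var builder = new StreamBuilder();

// Orders keyed by order id, customers keyed by customer id (placeholder topics).
var orders = builder.Table("orders", InMemory.As<string, string>("orders-store"));
var customers = builder.Table("customers", InMemory.As<string, string>("customers-store"));

// Workaround for the missing FK join: turn the orders table into a stream,
// re-key it by the customer id carried inside the order value, then join it
// against the customers table on that new key.
orders
    .ToStream()
    .SelectKey((orderId, order) => order.Split(';')[1]) // placeholder: extract the customer id
    .Join(customers, (order, customer) => $"{order}-{customer}")
    .To("customer-orders");
```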
The error we are getting:
Hi @MladenTasevski, Could you provide more logs regarding your example, please? All details are welcome :) Regards,
Hello again @LGouellec. Here I am trying to join the order table with the customer table and the book table.
What more would you need from the logs? Also, when I use SelectKey right before the join it fails as well, so I only use SelectKey when initializing the stream or after joining the streams. I am not getting any joins on customerOrders and bookOrders even though I checked that there were entries with the same ID in them. This is my current implementation:
Hi @MladenTasevski, Regarding this log:
It seems that your source topics are not co-partitioned. During a JOIN operation, both topics need to have the same number of partitions, otherwise the join can't work. I recommend reading this article; there are other public blog posts available as well. Best regards,
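For illustration, co-partitioning simply means that every topic feeding the join is created with the same partition count, so that records sharing a key land in the same partition of each topic. A minimal sketch with Confluent.Kafka's admin client (broker address, topic names, and partition count are placeholders):

```csharp
using Confluent.Kafka;
using Confluent.Kafka.Admin;

using var admin = new AdminClientBuilder(
    new AdminClientConfig { BootstrapServers = "localhost:9092" }).Build();

// All three join inputs get the same number of partitions (4 here),
// which is the co-partitioning requirement the join relies on.
await admin.CreateTopicsAsync(new[]
{
    new TopicSpecification { Name = "orders",    NumPartitions = 4, ReplicationFactor = 1 },
    new TopicSpecification { Name = "customers", NumPartitions = 4, ReplicationFactor = 1 },
    new TopicSpecification { Name = "books",     NumPartitions = 4, ReplicationFactor = 1 },
});
```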
@LGouellec I didn't know about the partition requirement for joining, thank you for pointing that out. I've managed to join the KTables, but I am getting a lot of out-of-order records, and I think that is what is causing the latency in the streams. I'm just joining and using SelectKey. What could be some reasons for the out-of-order records?
@MladenTasevski So your topology is split into two parts. The internal consumer subscribes to both topics (the source topic and the repartition topic), so you are not guaranteed to consume messages from the source topic first and then from the repartition topic; they are consumed in parallel. Out-of-order records can therefore appear when the timestamp already present in the state store is newer than the timestamp of the current record. By the way, you can visualize your topology with this tool: streamBuilder.Build().Describe().ToString();
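For example, printing the topology description makes the two sub-topologies and the internal repartition topic visible (a minimal sketch; the join topology itself is elided):

```csharp
using System;
using Streamiz.Kafka.Net;

var builder = new StreamBuilder();
// ... build the tables, SelectKey, and joins here ...

Topology topology = builder.Build();

// Lists every sub-topology, including the *-repartition topics created
// when the key is changed before a join; those are the inputs that are
// consumed in parallel with the source topics.
Console.WriteLine(topology.Describe().ToString());
```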
@LGouellec Thank you for that explanation and the tool, it helped a lot to understand what's going on. I was getting a lot of out-of-order records. I am doing an FK join of 3 tables which all have different PKs. The out-of-order records appear too often and it's causing performance issues. I don't know if there's something more to try; I may also be doing something incorrectly.
@MladenTasevski In the producers of the source topics, are the timestamps of your messages set explicitly, or do you let the broker assign them?
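For reference, "set explicitly" would mean something like this on the producer side (a sketch with Confluent.Kafka; topic, key, and value are placeholders):

```csharp
using System;
using Confluent.Kafka;

using var producer = new ProducerBuilder<string, string>(
    new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();

// The application chooses the record's timestamp (event time). If Timestamp
// is left at its default, the client/broker assigns the time instead.
await producer.ProduceAsync("orders", new Message<string, string>
{
    Key = "order-1",
    Value = "...",
    Timestamp = new Timestamp(DateTime.UtcNow)
});
```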
@LGouellec I keep the default value and let the broker do it. I haven't touched the timestamp extractor.
Ok. Can you attach a dump of all your source topics and your topology to this issue?
Description
The Java Kafka Streams library supports the KTable-KTable Foreign-Key Join, as mentioned at https://kafka.apache.org/27/documentation/streams/developer-guide/dsl-api.html#ktable-ktable-fk-join.
A sample use case for the Java version is posted at https://kafka-tutorials.confluent.io/foreign-key-joins/kstreams.html.
Will the same functionality be implemented in Streamiz Kafka .NET soon?