Naturally, one might try to use `RegexRouter` to send multiple topics to a single directory, say for data coming from a JDBC source connector. But this will throw an NPE.
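For reference, a minimal sink config along these lines reproduces the setup; the bucket name, regex, and replacement are made up for illustration (the issue doesn't show the exact transform used):

```properties
# Hedged repro config; bucket name and regex are illustrative only.
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=SQLSERVER_TEST_TABLE_TEST
s3.bucket.name=example-bucket
# RegexRouter renames the topic before the sink task sees the record
transforms=route
transforms.route.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.route.regex=SQLSERVER_TEST_(.*)
transforms.route.replacement=$1
```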
When debugging (the S3 connector, specifically), we see that the data needed to generate the top-level folder is available, but the storage writer cannot access it from its map: there is a `HashMap` keyed by the original topic name (`SQLSERVER_TEST_TABLE_TEST-0`), while the transform has already been applied (`TABLE-TEST-0`), so looking up the "new" topic name fails to find the S3 writer for that `TopicPartition`.
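A minimal sketch of the failure mode (names are simplified, not the exact connector source): the framework opens writers under the original `TopicPartition`s, but records arrive carrying the post-SMT topic name, so the map lookup misses.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;

class SketchSinkTask {
  // Keyed by the pre-transform partition, e.g. SQLSERVER_TEST_TABLE_TEST-0
  private final Map<TopicPartition, Writer> writers = new HashMap<>();

  void open(Collection<TopicPartition> partitions) {
    for (TopicPartition tp : partitions) {
      writers.put(tp, new Writer(tp)); // original topic name as the key
    }
  }

  void put(Collection<SinkRecord> records) {
    for (SinkRecord record : records) {
      // record.topic() is the post-SMT name (e.g. TABLE-TEST), so this key
      // (TABLE-TEST-0) was never registered in open()...
      TopicPartition tp =
          new TopicPartition(record.topic(), record.kafkaPartition());
      writers.get(tp).write(record); // ...get() returns null -> NPE here
    }
  }

  static class Writer { // stand-in for the connector's partition writer
    Writer(TopicPartition tp) {}
    void write(SinkRecord record) {}
  }
}
```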
I think adding a separate config in the `storage-common` module to perform the `RegexRouter` logic outside of the SMT pipeline would solve this problem, and it could be patched into the Hadoop, S3, and other storage connectors.
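A hypothetical sketch of that idea: the connector applies the rename itself, only when deriving the output directory, while writer lookups keep using the original `TopicPartition`. The class and config keys here are invented for illustration; only the `matches()`/`replaceFirst()` semantics mirror `RegexRouter`.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class TopicDirectoryRouter {
  private final Pattern regex;
  private final String replacement;

  TopicDirectoryRouter(Map<String, String> connectorConfig) {
    // Hypothetical keys, mirroring RegexRouter's "regex"/"replacement"
    this.regex = Pattern.compile(connectorConfig.get("topic.directory.regex"));
    this.replacement = connectorConfig.get("topic.directory.replacement");
  }

  // Called only where the top-level folder is generated, so the SMT
  // pipeline and the writer map keyed by original topics stay untouched.
  String directoryFor(String originalTopic) {
    Matcher m = regex.matcher(originalTopic);
    return m.matches() ? m.replaceFirst(replacement) : originalTopic;
  }
}
```

Because records would still be looked up under their original `TopicPartition`, multiple topics could share one directory without hitting the NPE.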
Will do. I run the whole thing using Docker directly and EKS (Kubernetes). Now I need to compile the project and set up the local environment; it may take some time 😄