We are still seeing some instances of this message in the ContextualServicesSender Dataflow job in production (example):
```
*~*~*~ Channel ManagedChannelImpl{logId=59, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
    Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
```
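For reference, the `shutdown()`/`shutdownNow()`/`awaitTermination()` contract the warning names is the same one `java.util.concurrent.ExecutorService` exposes, so here is a minimal sketch of the graceful-shutdown idiom gRPC expects, using an `ExecutorService` as a stand-in for the `ManagedChannel` (the helper name `shutdownGracefully` is our own, not a Beam or gRPC API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {
    // Graceful-shutdown idiom: orderly shutdown first, then force, then verify
    // termination. gRPC's ManagedChannel has the same three-method contract.
    static boolean shutdownGracefully(ExecutorService resource, long timeoutSeconds)
            throws InterruptedException {
        resource.shutdown();  // stop accepting new work, let in-flight work finish
        if (!resource.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
            resource.shutdownNow();  // timed out: cancel pending work
            return resource.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> {});
        System.out.println(shutdownGracefully(pool, 5)); // prints "true"
    }
}
```

The warning means the channel was garbage-collected without this sequence ever completing.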
We were previously having OOM issues that we believe were related to this problem. We are not yet seeing those on 2.31.0, so perhaps the severity of the problem is significantly reduced. The errors come in spurts: roughly three messages emitted, then hours of silence.
Of note, I previously thought this error indicated that Beam was using the new Storage Write API for all streaming inserts. That's not the case: there is a separate `WriteMethod.STORAGE_WRITE_API` that must be explicitly chosen in order to use the Write API.
But! `DatasetServicesImpl` provisions a `newWriteClient` regardless of whether the Write API is in use. The error message is about that write client not being shut down properly, even though our pipeline never attempts to send data to BQ via the Write API.
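For context, opting into the Write API in Beam is an explicit choice on the sink. A sketch of what that selection looks like, assuming Beam 2.31.0's `BigQueryIO` and a hypothetical table reference (our pipeline does not set this, so it stays on the default streaming-inserts path):

```java
// Illustrative fragment only; requires beam-sdks-java-io-google-cloud-platform.
rows.apply(
    BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.my_table")  // hypothetical table
        // Default path; the Write API is only used if you instead pass
        // BigQueryIO.Write.Method.STORAGE_WRITE_API here.
        .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS));
```

Despite never selecting `STORAGE_WRITE_API`, we still see the leaked `bigquerystorage.googleapis.com` channel warnings above.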
We had previously downgraded to Beam 2.28.0 to avoid this issue, but it was supposed to be fixed in 2.31.0. See https://issues.apache.org/jira/browse/BEAM-12356
cc @BenWu