
Irregular errors in driver when multiple statements from one connection used concurrently #115

Open
merzbird opened this issue Apr 16, 2024 · 1 comment

Comments

@merzbird

Driver version

2.1.0.26

Redshift version

PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.63269

Client Operating System

macOS Version 14.3.1

JAVA/JVM version

17.0.3

Problem description

  1. More than one statement created from the same connection is used concurrently.
  2. One statement is in the process of reading rows (on a separate thread, via the ring buffer).
  3. Another statement is closed at the same time.

Expected: no errors

Actual: one of the statements fails with an "Unexpected packet type" error

JDBC trace logs

Apr 16 16:51:23.798 DEBUG  [1 main] com.amazon.redshift.Driver.connect: ===================================
Apr 16 16:51:23.804 FUNCTION  [1 main] com.amazon.redshift.Driver.connect:  Enter (jdbc:redshift://engineering.c2mgwqa9v1kj.us-east-1.redshift.amazonaws.com:5439/dev?allowMultiQueries=true;ssl=false;loginTimeout=0;LogLevel=DEBUG,{ssl=false, logintimeout=0, password=***, dbname=dev, port=5439, loglevel=DEBUG, host=engineering.c2mgwqa9v1kj.us-east-1.redshift.amazonaws.com, user=integration_test_user, iamauth=false}) 
Apr 16 16:51:23.804 DEBUG  [1 main] com.amazon.redshift.Driver.connect: Connecting with URL: jdbc:redshift://engineering.c2mgwqa9v1kj.us-east-1.redshift.amazonaws.com:5439/dev?allowMultiQueries=true;ssl=false;loginTimeout=0;LogLevel=DEBUG
Apr 16 16:51:23.804 DEBUG  [1 main] com.amazon.redshift.Driver.connect: Caller stack[main]: jdbc.Main.main(Main.java:14)
Apr 16 16:51:23.826 DEBUG  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.RedshiftConnectionImpl: Redshift JDBC Driver 2.1.0.26
Apr 16 16:51:23.826 DEBUG  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.RedshiftConnectionImpl: JVM architecture is 64-bit
Apr 16 16:51:23.828 DEBUG  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.setDefaultFetchSize:   setDefaultFetchSize = 0
Apr 16 16:51:23.828 DEBUG  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.setPrepareThreshold:   setPrepareThreshold = 5
Apr 16 16:51:23.837 DEBUG  [1 main] com.amazon.redshift.core.v3.ConnectionFactoryImpl.openConnectionImpl: Trying to establish a protocol version 3 connection to engineering.c2mgwqa9v1kj.us-east-1.redshift.amazonaws.com:5439
Apr 16 16:51:23.840 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.RedshiftStream: hostspec host: engineering.c2mgwqa9v1kj.us-east-1.redshift.amazonaws.com
Apr 16 16:51:23.840 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.RedshiftStream: hostspec port: 5439
Apr 16 16:51:24.063 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.RedshiftStream: address: engineering.c2mgwqa9v1kj.us-east-1.redshift.amazonaws.com/3.234.101.159
Apr 16 16:51:24.063 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.RedshiftStream: port: 5439
Apr 16 16:51:24.064 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.RedshiftStream: hostname: engineering.c2mgwqa9v1kj.us-east-1.redshift.amazonaws.com
Apr 16 16:51:24.064 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.RedshiftStream: hoststring: engineering.c2mgwqa9v1kj.us-east-1.redshift.amazonaws.com
Apr 16 16:51:24.066 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.changeStream: Compression is disabled. Creating regular input stream.
Apr 16 16:51:24.074 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.RedshiftStream: Gets a new stream on a new socket
Apr 16 16:51:24.091 DEBUG  [1 main] com.amazon.redshift.core.v3.ConnectionFactoryImpl.constructNewStream: Receive Buffer Size is 132,480
Apr 16 16:51:24.091 DEBUG  [1 main] com.amazon.redshift.core.v3.ConnectionFactoryImpl.constructNewStream: Send Buffer Size is 132,480
Apr 16 16:51:24.091 INFO  [1 main] com.amazon.redshift.core.v3.ConnectionFactoryImpl.getParametersForStartup: using plugin: null
Apr 16 16:51:24.096 DEBUG  [1 main] com.amazon.redshift.core.v3.ConnectionFactoryImpl.sendStartupPacket:  FE=> StartupPacket(user=integration_test_user, database=dev, client_encoding=UTF8, DateStyle=ISO, TimeZone=Europe/Berlin, extra_float_digits=2, application_name=[main]jdbc.Main.main(Main.java:14), driver_version=Redshift JDBC Driver 2.1.0.26, os_version=Mac OS X 14.3.1 x86_64, client_protocol_version=3)
Apr 16 16:51:24.098 INFO  [1 main] com.amazon.redshift.core.RedshiftStream.changeStream: Compression is disabled. Creating regular input stream.
Apr 16 16:51:24.202 DEBUG  [1 main] com.amazon.redshift.core.v3.ConnectionFactoryImpl.doAuthentication:  <=BE AuthenticationReqMD5
Apr 16 16:51:24.205 DEBUG  [1 main] com.amazon.redshift.core.v3.ConnectionFactoryImpl.doAuthentication:  FE=> Password(md5digest)
Apr 16 16:51:24.319 DEBUG  [1 main] com.amazon.redshift.core.v3.ConnectionFactoryImpl.doAuthentication:  <=BE AuthenticationOk
Apr 16 16:51:24.355 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(application_name = [main]jdbc.Main.main(Main.java:14))
Apr 16 16:51:24.355 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(case_sensitive = on)
Apr 16 16:51:24.356 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(client_encoding = UTF8)
Apr 16 16:51:24.356 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(datestyle = ISO, MDY)
Apr 16 16:51:24.357 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(integer_datetimes = on)
Apr 16 16:51:24.357 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(intervalstyle = postgres)
Apr 16 16:51:24.357 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(is_superuser = off)
Apr 16 16:51:24.357 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(server_encoding = UNICODE)
Apr 16 16:51:24.358 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(server_version = 8.0.2)
Apr 16 16:51:24.358 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(session_authorization = integration_test_user)
Apr 16 16:51:24.358 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(TimeZone = Europe/Berlin)
Apr 16 16:51:24.365 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(timezone_abbreviations = Default)
Apr 16 16:51:24.366 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(padb_version = 1.0.63269)
Apr 16 16:51:24.366 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(padb_revision = 17187a5367)
Apr 16 16:51:24.366 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(max_numeric_precision = 38)
Apr 16 16:51:24.366 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(max_varchar_size = 65535)
Apr 16 16:51:24.366 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(datashare_enabled = off)
Apr 16 16:51:24.367 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(external_database = off)
Apr 16 16:51:24.367 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(session_id = a6b5e48a-109a-4cfc-9627-d9a0d2b30816)
Apr 16 16:51:24.367 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveParameterStatus:  <=BE ParameterStatus(server_protocol_version = 1)
Apr 16 16:51:24.367 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.readStartupMessages:  <=BE BackendKeyData(pid=1,073,857,821,ckey=1,110,816,411)
Apr 16 16:51:24.367 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveRFQ:  <=BE ReadyForQuery(I)
Apr 16 16:51:24.372 DEBUG  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.RedshiftConnectionImpl:     types using binary send = TIMESTAMPTZ,INTERVALY2M,INTERVALD2S,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
Apr 16 16:51:24.372 DEBUG  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.RedshiftConnectionImpl:     types using binary receive = TIMESTAMPTZ,INTERVALY2M,INTERVALD2S,UUID,INT2_ARRAY,INT4_ARRAY,BYTEA,TEXT_ARRAY,TIMETZ,INT8,INT2,INT4,VARCHAR_ARRAY,INT8_ARRAY,POINT,DATE,TIMESTAMP,TIME,BOX,FLOAT4,FLOAT8,FLOAT4_ARRAY,FLOAT8_ARRAY
Apr 16 16:51:24.372 DEBUG  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.RedshiftConnectionImpl:     integer date/time = true
Apr 16 16:51:24.382 FUNCTION  [1 main] com.amazon.redshift.Driver.connect:  Return com.amazon.redshift.jdbc.RedshiftConnectionImpl@29ca901e 
Apr 16 16:51:24.382 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Enter () 
Apr 16 16:51:24.382 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Enter (1003,1007) 
Apr 16 16:51:24.382 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Enter (1003,1007,2) 
Apr 16 16:51:24.388 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftStatementImpl.setFetchSize:  Enter (0) 
Apr 16 16:51:24.388 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftStatementImpl.setFetchSize:  Return  
Apr 16 16:51:24.388 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Return com.amazon.redshift.jdbc.RedshiftStatementImpl@71318ec4 
Apr 16 16:51:24.388 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Return com.amazon.redshift.jdbc.RedshiftStatementImpl@71318ec4 
Apr 16 16:51:24.388 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Return com.amazon.redshift.jdbc.RedshiftStatementImpl@71318ec4 
Apr 16 16:51:24.395 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftStatementImpl.execute:  Enter (SELECT 42 AS age, 180 AS height
UNION ALL
SELECT 50, 170
ORDER BY age
LIMIT 1001
) 
Apr 16 16:51:24.397 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.getAutoCommit:  Enter () 
Apr 16 16:51:24.397 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.getAutoCommit:  Return true 
Apr 16 16:51:24.398 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.execute:   simple execute, handler=com.amazon.redshift.jdbc.RedshiftStatementImpl$StatementResultHandler@6fc6f14e, maxRows=0, fetchSize=0, flags=17
Apr 16 16:51:24.399 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendOneQuery:  FE=> OneQuery(rows="0")
Apr 16 16:51:24.401 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendParse:  FE=> Parse(stmt=null,query="SELECT 42 AS age, 180 AS height
UNION ALL
SELECT 50, 170
ORDER BY age
LIMIT 1001
",oids={})
Apr 16 16:51:24.401 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendBind:  FE=> Bind(stmt=null,portal=null)
Apr 16 16:51:24.401 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendDescribePortal:  FE=> Describe(portal=null)
Apr 16 16:51:24.401 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendExecute:  FE=> Execute(portal=null,limit=0)
Apr 16 16:51:24.402 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendFlush:  FE=> Flush
Apr 16 16:51:24.402 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendSync:  FE=> Sync
Apr 16 16:51:24.403 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:   useRingBuffer=true, handler.wantsScrollableResultSet()=false, subQueries=false, bothRowsAndStatus=false
Apr 16 16:51:24.510 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:49=1
Apr 16 16:51:24.511 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  <=BE ParseComplete [null]
Apr 16 16:51:24.511 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:50=2
Apr 16 16:51:24.511 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  <=BE BindComplete [unnamed]
Apr 16 16:51:24.511 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:84=T
Apr 16 16:51:24.513 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveFields:  <=BE RowDescription(2)
Apr 16 16:51:24.514 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveFields:         Field(age,INT4,4,T)
Apr 16 16:51:24.514 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveFields:         Field(height,INT4,4,T)
Apr 16 16:51:24.516 DEBUG  [1 main] com.amazon.redshift.core.v3.RedshiftRowsBlockingQueue.init: init(): limitByBufSize=true , totalFetchRingBufferSize=0, fetchRingBufferSizeCapacity = 1,000,000,000, fetchSize = 0
Apr 16 16:51:24.525 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftResultSet.setFetchSize:  Enter (0) 
Apr 16 16:51:24.525 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftResultSet.setFetchDirection:  Enter (1000) 
Apr 16 16:51:24.525 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  <=BE DataRow(len=5)
Apr 16 16:51:24.525 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread: DataRow exiting the message loop on the application thread
Apr 16 16:51:24.526 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftStatementImpl.execute:  Return  
Apr 16 16:51:24.525 DEBUG  [15 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:   useRingBuffer=true, handler.wantsScrollableResultSet()=false, subQueries=false, bothRowsAndStatus=false
Apr 16 16:51:24.527 DEBUG  [15 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  <=BE DataRow(len=5)
Apr 16 16:51:24.527 DEBUG  [15 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:67=C
Apr 16 16:51:24.527 DEBUG  [15 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveCommandStatus:  <=BE CommandStatus(SELECT)
Apr 16 16:51:24.528 DEBUG  [15 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:90=Z
Apr 16 16:51:24.529 DEBUG  [15 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveRFQ:  <=BE ReadyForQuery(I)
Apr 16 16:51:24.530 DEBUG  [15 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread: ReadyForQuery will exit from processResultsOnThread loop
Apr 16 16:51:24.541 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftResultSet.close:  Enter () 
Apr 16 16:51:24.541 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftResultSet.close:  Return  
Apr 16 16:51:24.547 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Enter () 
Apr 16 16:51:24.547 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Enter (1003,1007) 
Apr 16 16:51:24.547 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Enter (1003,1007,2) 
Apr 16 16:51:24.547 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftStatementImpl.setFetchSize:  Enter (0) 
Apr 16 16:51:24.547 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftStatementImpl.setFetchSize:  Return  
Apr 16 16:51:24.547 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Return com.amazon.redshift.jdbc.RedshiftStatementImpl@47ef968d 
Apr 16 16:51:24.547 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Return com.amazon.redshift.jdbc.RedshiftStatementImpl@47ef968d 
Apr 16 16:51:24.548 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.createStatement:  Return com.amazon.redshift.jdbc.RedshiftStatementImpl@47ef968d 
Apr 16 16:51:24.548 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftStatementImpl.execute:  Enter (SELECT 42 AS age, 180 AS height
UNION ALL
SELECT 50, 170
ORDER BY age
LIMIT 1001
) 
Apr 16 16:51:24.548 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.getAutoCommit:  Enter () 
Apr 16 16:51:24.548 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftConnectionImpl.getAutoCommit:  Return true 
Apr 16 16:51:24.548 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.execute:   simple execute, handler=com.amazon.redshift.jdbc.RedshiftStatementImpl$StatementResultHandler@23e028a9, maxRows=0, fetchSize=0, flags=17
Apr 16 16:51:24.548 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendOneQuery:  FE=> OneQuery(rows="0")
Apr 16 16:51:24.549 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendParse:  FE=> Parse(stmt=null,query="SELECT 42 AS age, 180 AS height
UNION ALL
SELECT 50, 170
ORDER BY age
LIMIT 1001
",oids={})
Apr 16 16:51:24.549 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendBind:  FE=> Bind(stmt=null,portal=null)
Apr 16 16:51:24.549 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendDescribePortal:  FE=> Describe(portal=null)
Apr 16 16:51:24.549 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendExecute:  FE=> Execute(portal=null,limit=0)
Apr 16 16:51:24.549 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendFlush:  FE=> Flush
Apr 16 16:51:24.550 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.sendSync:  FE=> Sync
Apr 16 16:51:24.550 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:   useRingBuffer=true, handler.wantsScrollableResultSet()=false, subQueries=false, bothRowsAndStatus=false
Apr 16 16:51:24.647 FUNCTION  [16 ForkJoinPool.commonPool-worker-1] com.amazon.redshift.jdbc.RedshiftStatementImpl.close:  Enter () 
Apr 16 16:51:24.648 FUNCTION  [16 ForkJoinPool.commonPool-worker-1] com.amazon.redshift.jdbc.RedshiftStatementImpl.cancel:  Enter () 
Apr 16 16:51:24.648 ERROR  [16 ForkJoinPool.commonPool-worker-1] com.amazon.redshift.jdbc.RedshiftStatementImpl.cancel: statementState is StatementCancelState.IDLE
Apr 16 16:51:24.648 FUNCTION  [16 ForkJoinPool.commonPool-worker-1] com.amazon.redshift.jdbc.RedshiftStatementImpl.cancel:  Return  
Apr 16 16:51:24.656 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:49=1
Apr 16 16:51:24.656 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  <=BE ParseComplete [null]
Apr 16 16:51:24.656 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:50=2
Apr 16 16:51:24.657 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  <=BE BindComplete [unnamed]
Apr 16 16:51:24.658 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:84=T
Apr 16 16:51:24.658 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveFields:  <=BE RowDescription(2)
Apr 16 16:51:24.659 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveFields:         Field(age,INT4,4,T)
Apr 16 16:51:24.659 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.receiveFields:         Field(height,INT4,4,T)
Apr 16 16:51:24.659 DEBUG  [1 main] com.amazon.redshift.core.v3.RedshiftRowsBlockingQueue.init: init(): limitByBufSize=true , totalFetchRingBufferSize=0, fetchRingBufferSizeCapacity = 1,000,000,000, fetchSize = 0
Apr 16 16:51:24.659 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftResultSet.setFetchSize:  Enter (0) 
Apr 16 16:51:24.659 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftResultSet.setFetchDirection:  Enter (1000) 
Apr 16 16:51:24.660 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  <=BE DataRow(len=5)
Apr 16 16:51:24.660 DEBUG  [1 main] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread: DataRow exiting the message loop on the application thread
Apr 16 16:51:24.660 FUNCTION  [1 main] com.amazon.redshift.jdbc.RedshiftStatementImpl.execute:  Return  
Apr 16 16:51:24.660 DEBUG  [17 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:   useRingBuffer=true, handler.wantsScrollableResultSet()=false, subQueries=false, bothRowsAndStatus=false
Apr 16 16:51:24.660 DEBUG  [16 ForkJoinPool.commonPool-worker-1] com.amazon.redshift.core.v3.QueryExecutorImpl.sendFlush:  FE=> Flush
Apr 16 16:51:24.660 DEBUG  [17 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  <=BE DataRow(len=5)
Apr 16 16:51:24.662 DEBUG  [16 ForkJoinPool.commonPool-worker-1] com.amazon.redshift.core.v3.QueryExecutorImpl.sendSync:  FE=> Sync
Apr 16 16:51:24.662 DEBUG  [17 RingBufferThread] com.amazon.redshift.core.v3.QueryExecutorImpl.processResultsOnThread:  FE=> Received packet of type:67=C
Apr 16 16:51:24.663 ERROR  [16 ForkJoinPool.commonPool-worker-1] com.amazon.redshift.core.v3.QueryExecutorImpl.closeStatementAndPortal: java.io.IOException: Unexpected packet type: 0
	at com.amazon.redshift.core.v3.QueryExecutorImpl.processSyncOnClose(QueryExecutorImpl.java:1867)
	at com.amazon.redshift.core.v3.QueryExecutorImpl.closeStatementAndPortal(QueryExecutorImpl.java:1741)
	at com.amazon.redshift.jdbc.RedshiftStatementImpl.closeImpl(RedshiftStatementImpl.java:770)
	at com.amazon.redshift.jdbc.RedshiftStatementImpl.close(RedshiftStatementImpl.java:757)
	at jdbc.Main.lambda$createStmtExecuteAndClose$0(Main.java:53)
	at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
	at java.base/java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1796)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)

Apr 16 16:51:24.663 FUNCTION  [16 ForkJoinPool.commonPool-worker-1] com.amazon.redshift.jdbc.RedshiftStatementImpl.close:  Return  

Reproduction code

This code demonstrates the problem, though it may not reproduce on every run because of timing variations.

import static java.lang.Thread.sleep;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.CompletableFuture;

public class Main {

    public static void main(String[] args) throws SQLException, InterruptedException {
        log("||||| main");
        var conn = createConnection();
        try {
            createStmtExecuteAndClose(conn, "first");
        } catch (SQLException ex) {
            log("error in first");
            ex.printStackTrace();
        }
        try {
            createStmtExecuteAndClose(conn, "second");
        } catch (SQLException ex) {
            log("error in second");
            ex.printStackTrace();
        }
        log("main: sleep");
        sleep(500);
        try {
            conn.close();
        } catch (SQLException ex) {
            log("conn close\n" + ex.getMessage());
        }
        log("||||| main: end");
    }

    // Not shown in the original report: a placeholder helper so the example compiles.
    // Endpoint, user, and password are stand-ins; the logged run used
    // allowMultiQueries=true and ssl=false.
    private static Connection createConnection() throws SQLException {
        return DriverManager.getConnection(
                "jdbc:redshift://<cluster-endpoint>:5439/dev?allowMultiQueries=true;ssl=false",
                "<user>", "<password>");
    }

    private static void createStmtExecuteAndClose(Connection conn, String label) throws SQLException {
        var statement = conn.createStatement();
        log(label + ": create statement");
        statement.execute(simpleQuery);
        try (var rs = statement.getResultSet()) {
            while (rs.next()) {
                sleep(5);
                log(label + ": read row");
            }
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        // Close the statement later from another thread; this close is what races
        // with the next statement's reads and triggers the error above.
        CompletableFuture.runAsync(() -> {
            try {
                sleep(100);
                log(label + ": close statement");
                statement.close();
            } catch (SQLException ex) {
                log(ex.getMessage());
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        });
    }

    private static final String simpleQuery = """
        SELECT 42 AS age, 180 AS height
        UNION ALL
        SELECT 50, 170
        ORDER BY age
        LIMIT 1001
        """;

    private static void log(String x) {
        System.out.printf("[%d][tid=%d] %s%n", System.currentTimeMillis() % 60000, Thread.currentThread().getId(), x);
    }

}
@serge-rider

The same issue here.
The issue appeared after 2.1.0.10 and has been getting worse with every new release.

It looks like the driver no longer synchronizes on the connection, so concurrent SQL calls on the same connection break the native protocol.
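
Until the driver serializes these operations internally again, a possible application-side stop-gap is to funnel every call on the shared connection through one lock, so a close can never race with another statement's ring-buffer reads. This is only a sketch under that assumption; GuardedStatements and executeAndDrain are illustrative names, not driver API.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch only: serializes all statement work on one shared connection.
final class GuardedStatements {

    private final Connection conn;
    private final Object connLock = new Object();   // guards every call on conn

    GuardedStatements(Connection conn) {
        this.conn = conn;
    }

    // Executes the query, drains the rows, and closes the statement while
    // holding the lock, so no other thread can close or execute concurrently.
    void executeAndDrain(String sql) throws SQLException {
        synchronized (connLock) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute(sql);
                try (var rs = stmt.getResultSet()) {
                    while (rs.next()) {
                        // process the row here
                    }
                }
            }
        }
    }
}

The protocol handling in the driver is still the real bug; the lock above only avoids triggering it.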
