Replies: 4 comments 1 reply
-
How about loading schema info on demand, in which case the first interaction with a database object can be slow? This can be an important optimization for multi-DB support.
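If on-demand loading is explored, a minimal sketch could look like the following. This assumes a `ConcurrentHashMap`-backed cache where the first lookup for a table pays the metadata-query cost; `TableMetadata` and `loadFromDatabase` are hypothetical names for illustration, not DB2Rest APIs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of on-demand schema loading: table metadata is fetched lazily on
// first access instead of eagerly at startup. Names are illustrative only.
public class LazySchemaCache {

    private final Map<String, TableMetadata> cache = new ConcurrentHashMap<>();

    // First call for a table triggers the metadata query; later calls hit the cache.
    public TableMetadata getTable(String qualifiedName) {
        return cache.computeIfAbsent(qualifiedName, this::loadFromDatabase);
    }

    private TableMetadata loadFromDatabase(String qualifiedName) {
        // A real implementation would issue a targeted metadata query here
        // for just this one table; a stub is returned to keep the sketch runnable.
        return new TableMetadata(qualifiedName);
    }

    // Minimal placeholder type for illustration.
    public record TableMetadata(String name) { }
}
```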
-
Another option may be to combine the two options above:
For multi-DB connectivity, we could create separate schema files one by one. One open question here is how we can support schema changes for a large DB schema. Will that also need optimization?
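A rough sketch of the per-database schema file idea, assuming one snapshot file per configured database so startup can skip re-introspection. The `PerDatabaseSchemaStore` name, file layout, and properties-based format are all assumptions, not DB2Rest's actual persistence mechanism.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch: persist one schema snapshot per database under a cache directory,
// e.g. cache/orders-db.schema.properties. Details are illustrative only.
public class PerDatabaseSchemaStore {

    private final Path cacheDir;

    public PerDatabaseSchemaStore(Path cacheDir) {
        this.cacheDir = cacheDir;
    }

    // One file per database id.
    private Path fileFor(String dbId) {
        return cacheDir.resolve(dbId + ".schema.properties");
    }

    public void save(String dbId, Properties schema) throws IOException {
        Files.createDirectories(cacheDir);
        try (var out = Files.newOutputStream(fileFor(dbId))) {
            schema.store(out, "schema cache for " + dbId);
        }
    }

    public Properties load(String dbId) throws IOException {
        var schema = new Properties();
        try (var in = Files.newInputStream(fileFor(dbId))) {
            schema.load(in);
        }
        return schema;
    }
}
```

A file-per-database layout would also make schema-change handling incremental: only the file for the changed database needs to be rebuilt.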
-
@MichalisDBA - Can you please help with a similarly large schema for MySQL and PostgreSQL so that we can test and determine whether this is just an Oracle 9i bad-SQL issue?
-
Closing
-
This is a continuation of issue #450
Tests on large Oracle 9i databases with over 3500 tables and many columns per table are slow: creating the schema cache takes a very long time even when DB2Rest is on the same network subnet. If DB2Rest is in a different region, it takes even longer and can sometimes time out.
Investigation revealed that this was due to inefficient generic queries used by the OJDBC driver to retrieve table and column information.
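For context, a targeted query against Oracle's standard `ALL_TAB_COLUMNS` dictionary view is the usual way to avoid the driver's generic `DatabaseMetaData.getColumns()` path. The sketch below is illustrative; treating it as a drop-in replacement for DB2Rest's actual loader is an assumption.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: fetch all column metadata for one owner in a single scan of
// ALL_TAB_COLUMNS, instead of per-table generic JDBC metadata calls.
public class OracleColumnLoader {

    private static final String COLUMNS_SQL =
        "SELECT table_name, column_name, data_type, nullable " +
        "FROM all_tab_columns WHERE owner = ? " +
        "ORDER BY table_name, column_id";

    public void printColumns(Connection conn, String owner) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(COLUMNS_SQL)) {
            ps.setString(1, owner);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s.%s %s%n",
                        rs.getString("table_name"),
                        rs.getString("column_name"),
                        rs.getString("data_type"));
                }
            }
        }
    }
}
```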
Options for optimization
- An INCLUDE_SCHEMA environment variable to limit the number of tables and columns loaded into the schema cache (a sketch follows below).

Other options to consider in the future
Reference - https://oracle-base.com/articles/9i/dbms_metadata
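A minimal sketch of how such an `INCLUDE_SCHEMA` filter might behave, assuming a comma-separated list of schema names; the variable's exact semantics in DB2Rest are not defined here, so the format and matching rules below are assumptions.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of the proposed INCLUDE_SCHEMA filter: only schemas named in the
// environment variable are introspected; an empty/unset value loads everything.
public class SchemaFilter {

    private final Set<String> included;

    public SchemaFilter(String includeSchemaEnv) {
        // e.g. INCLUDE_SCHEMA=HR,SALES -> load only HR and SALES (assumed format)
        this.included = includeSchemaEnv == null || includeSchemaEnv.isBlank()
            ? Set.of() // empty set means "no filter"
            : Arrays.stream(includeSchemaEnv.split(","))
                    .map(String::trim)
                    .map(String::toUpperCase)
                    .collect(Collectors.toSet());
    }

    public boolean shouldLoad(String schemaName) {
        return included.isEmpty() || included.contains(schemaName.toUpperCase());
    }

    public static void main(String[] args) {
        SchemaFilter filter = new SchemaFilter(System.getenv("INCLUDE_SCHEMA"));
        for (String s : List.of("HR", "SALES", "SCOTT")) {
            System.out.println(s + " -> " + filter.shouldLoad(s));
        }
    }
}
```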