
Sync 6.26.0 changes to develop branch #656

Merged: 102 commits merged into adb-6.x-dev from 6.26.0-sync2dev on Dec 7, 2023
Conversation

@Stolb27 (Collaborator) commented Dec 5, 2023

No description provided.

mperezfuster and others added 30 commits September 11, 2023 15:27
* Docs: two new GUCs for the timestamp9 module

* Update gpdb-doc/markdown/ref_guide/modules/timestamp9.html.md

Co-authored-by: Xing Guo <[email protected]>

---------

Co-authored-by: David Yozie <[email protected]>
Co-authored-by: Xing Guo <[email protected]>
When os_type is ubuntu20.04, the job names
reduced-frequency-trigger-start-[[ os_type ]]
reduced-frequency-trigger-stop-[[ os_type ]]
become
reduced-frequency-trigger-start-ubuntu20.04
reduced-frequency-trigger-stop-ubuntu20.04

For Concourse, a dot in a var's name acts as a field accessor: it looks for
field 04 of var reduced-frequency-trigger-start-ubuntu20 and field 04 of var
reduced-frequency-trigger-stop-ubuntu20.
Refer: https://concourse-ci.org/vars.html#var-syntax

To work around this, wrap the var in double quotes.

[GPR-1532]

Authored-by: Shaoqi Bai <[email protected]>
* Added documentation for gpcheckperf option

* Changed the buffer-size option's default to 8KB
* docs - use resource groups to limit greenplum_fdw concurrency

* review edits requested

* below -> above
`SendDummyPacket` eventually calls `getaddrinfo` (which is reentrant);
however, `getaddrinfo` is not an async-signal-safe function.
`getaddrinfo` internally calls `malloc`, which is strongly discouraged
within a signal handler as it may cause deadlocks.
Cache the accepted socket information for the listener, so that it can
be reused in `SendDummyPacket()`.

The purpose of `SendDummyPacket` is to exit more quickly; it
circumvents the polling, which otherwise times out after 250 ms.

Without `SendDummyPacket()`, there will be multiple test failures, since
some tests expect the backend connection to terminate almost
immediately.

For the full list of async-signal-safe functions, see the
signal-safety(7) Linux manual page.

Reviewed-by: Soumyadeep Chakraborty <[email protected]>
Reviewed-by: Andrew Repp <[email protected]>

This commit is inspired by 91a9a57eb131e21c96cccbac16f0a5ab024e2215.
This is not a direct cherry-pick as there were conflicts, so I did most
of the changes manually.
Problem: Wrong results generated for subquery in projection list for replicated
tables.

Analysis: To derive the distribution for any join operator,
CPhysicalJoin::PdsDerive() is invoked. To derive the distribution, it checks
the DistributionSpec of the outer and inner children. When the outer child's
DistributionSpec is Replicated and the inner child's is Universal, Universal
is returned as the derived distribution. Consequently no "Gather Motion" is
created, and since the data does not reside on the coordinator, the query
returns no rows (a repro sketch follows the backport link below).

Backport of https://github.com/greenplum-db/gpdb/commit/5c36a44ab03c43e82ef08006ad6021773c6176b6
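
A minimal repro sketch of the wrong-results shape (table name and query are hypothetical, not the exact case from the fix):

```
-- Hypothetical repro: a subquery in the projection list over a
-- replicated table. Before the fix, ORCA derived a Universal
-- distribution, added no Gather Motion, and returned no rows.
CREATE TABLE rep_t (a int) DISTRIBUTED REPLICATED;
INSERT INTO rep_t VALUES (1), (2);

-- Outer child replicated, inner child (the scalar subquery) universal:
SELECT a, (SELECT 1) AS one FROM rep_t;
```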
* Docs: add configuration parameter work_mem

* Small copyedit

---------

Co-authored-by: David Yozie <[email protected]>
…6.25) (#16389)

Fixes - https://github.com/greenplum-db/gpdb/issues/16356

* Fix gpcheckperf discrepancy in network test results (v6.23 vs v6.25)

Problem - gpcheckperf provides different results when executing a sequential
network test with a specified hostfile between versions 6.23 and 6.25.

RCA - The problem arises from the getHostList function, which assigns
GV.opt['-h'] a list of hosts, so the rest of the code behaves as if the -h
option had been set explicitly. Because GV.opt['-h'] is already populated
during network testing, the code incorrectly combines hosts from both
GV.opt['-f'] and GV.opt['-h'], when it should retrieve hosts exclusively from
one or the other. The resulting redundant host list causes the observed
discrepancy.

Solution - Omit the assignment of the global variable GV.opt['-h'] within the
getHostList function. Instead, call a function to retrieve the host list
wherever it is needed elsewhere in the code.

Test - Added behave test case for the fix.
…set. (#16433) (#16478)

Issue: The following gpMgmt utilities do not honor the -d flag when
MASTER_DATA_DIRECTORY is not set.
1. gpstart
2. gpstop
3. gpstate
4. gprecoverseg
5. gpaddmirrors

RCA: To get the master data directory, the above-listed utilities call the
gp.getmaster_datadir() function. The function has no provision to return the
master data directory provided with the -d flag; currently it only looks at
the MASTER_DATA_DIRECTORY environment variable.

Also, some of the utilities created lock files before parsing the provided
options, a design flaw that caused the utilities to crash when looking for
the master data directory.

Fix: Added a global flag which holds the data directory provided with the -d
option. When the utility parses its options, it sets the flag with the
provided data directory, and the same is returned when we call
gp.getmaster_datadir().

Test:
Added behave test cases to use the provided data directory when
MASTER_DATA_DIRECTORY is not set.
Added a behave test case to check that the data directory provided with -d is
preferred over an already set MASTER_DATA_DIRECTORY env variable: with a wrong
MASTER_DATA_DIRECTORY set, running the utility with the correct data directory
via the -d option should still succeed.
This is the backport of #16465, to fix the issue #16447.

The resolution is quite simple and direct: if an AO materialized view has indexes, create the block directory
for it.
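
A sketch of the affected shape (names are hypothetical, assuming GPDB 6X append-optimized storage options):

```
-- Hypothetical example: an append-optimized materialized view with an
-- index. With the fix, the block directory is created for the MV so
-- index lookups work.
CREATE MATERIALIZED VIEW mv_ao WITH (appendonly=true) AS
    SELECT g AS a FROM generate_series(1, 100) g
DISTRIBUTED BY (a);

CREATE INDEX mv_ao_idx ON mv_ao (a);
```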
…ctness during OS upgrade (#16367)

This is the backport of #16333

During OS upgrades, such as an upgrade from CentOS 7 to CentOS 8, locale
changes can occur that alter the data distribution or the position of data
within partitions.

To detect this, we add a new GUC, gp_detect_data_correctness: when it is set
to on, we do not actually insert data; we only check whether each row belongs
to the current segment or partition.
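
A hedged usage sketch (the GUC name is from this message; the exact workflow and table name are assumptions):

```
-- With the GUC on, INSERT validates row placement instead of actually
-- inserting: it checks that each row belongs to the segment/partition
-- it is being loaded into.
SET gp_detect_data_correctness = on;
INSERT INTO my_dist_table SELECT * FROM my_dist_table;  -- hypothetical check run
SET gp_detect_data_correctness = off;
```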
Backport from 6bbafd9.

For 6X, we create a new extension called "gp_check_functions"
instead of burning the functions/views into gp_toolkit. The main reason
is to avoid requiring existing 6X users to reinstall gp_toolkit, which
may have many objects depending on it. Correspondingly, the views are
created under the default namespace. To use:

```
create extension if not exists gp_check_functions;

-- checking non-extended files
select * from gp_check_missing_files;
select * from gp_check_orphaned_files;

-- checking all data files including the extended data files
-- (e.g. 12345.1, 99999.2). These do not count supporting files
-- such as .fsm .vm etc. And currently we only support checking
-- extended data files for AO/CO tables, not heap.
select * from gp_check_missing_files_ext;
select * from gp_check_orphaned_files_ext;
```

Other adjustments:

* In 6X, external, foreign and virtual tables can have a valid
relfilenode even though they have no data files stored in the common
tablespaces. Skip them by checking s.relstorage.

* In 6X, it is known that extended data files created and truncated
in the same transaction won't be removed (see #15342). Unfortunately,
our orphaned file checking scripts cannot differentiate such a
false-alarm case from other cases (though it is debatable whether such
data files should really be counted as false alarms). So in order not
to interfere with the test, we now drop the tables in test truncate_gp
so that those data files are removed.

* We create the function get_tablespace_version_directory_name() in the
new extension. With that, we remove the duplicate definition in the
regress test and adjust the tests accordingly.

Original commit message:

1. Add views to get "existing" relation files in the database,
  including the default, global and user tablespaces. Note that
  this won't expose files outside of the data directory, as we
  only use pg_ls_dir to get the file list, which cannot reach
  files outside the data directories (including the user
  tablespace directory).

2. Add views to get "expected" relation files in the database,
  using the knowledge from the catalog.

3. Using 1 and 2, construct views to get the missing files (i.e.
  files that are expected but do not exist) and orphaned files (i.e.
  files that exist unexpectedly). A simplified sketch of this
  construction follows the notes below.

4. Create views to run the above views in MPP. Also, we
  support checking extended data files for AO/CO tables.

5. Add regress tests.

To use:
```
  -- checking non-extended files
  select * from gp_toolkit.gp_check_missing_files;
  select * from gp_toolkit.gp_check_orphaned_files;

  -- checking all data files including the extended data files
  -- (e.g. 12345.1, 99999.2). These do not count supporting files
  -- such as .fsm .vm etc. And currently we only support checking
  -- extended data files for AO/CO tables, not heap.
  select * from gp_toolkit.gp_check_missing_files_ext;
  select * from gp_toolkit.gp_check_orphaned_files_ext;
```

Note:
* As mentioned, we currently support checking all the non-extended data
  files, plus the extended data files of AO/CO tables. The main reason
  to separate the two is performance: constructing the expected file
  list for AO/CO segments runs dynamic SQL on each aoseg/aocsseg table
  and can be slow, so only do that when really required.
* For heap tables, we currently have no way to get the expected number
  of data files for a given table: we cannot use pg_relation_size,
  because that is in turn dependent on the number of data files itself.
  So always skip their extended files for now.
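
A simplified sketch of the missing/orphaned construction from points 1-3, using hypothetical helper views __existing_files and __expected_files (the shipped views differ in details):

```
-- Missing: expected per the catalog, but not present on disk.
CREATE VIEW check_missing_files_sketch AS
SELECT e.relfilenode, e.relname
FROM __expected_files e
LEFT JOIN __existing_files x USING (relfilenode)
WHERE x.relfilenode IS NULL;

-- Orphaned: present on disk, but not expected per the catalog.
CREATE VIEW check_orphaned_files_sketch AS
SELECT x.relfilenode, x.filename
FROM __existing_files x
LEFT JOIN __expected_files e USING (relfilenode)
WHERE e.relfilenode IS NULL;
```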
The __gp_aoseg/__gp_aocsseg functions provide more details, such as the eof
of segments. Use them for the missing/orphaned file check views, and make two
changes (illustrated in the sketch below):

* For checking missing files, ignore those with eof<=0. They might be
recycled while their aoseg/aocsseg entries are still there.

* For checking orphaned files, ignore those that still have a base file (one
without any extension number) present. Those might be files that have been
truncated but not yet removed, or files left behind when a column is
rewritten during ALTER COLUMN. The checking logic now becomes: only if the
base file is orphaned too do we report all the extensions along with it.

Also run the regress test at the end of the schedule for a better chance of
catching anomalies.
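
A runnable illustration of the orphaned-file filter (the file names and the orphaned-base set are made up):

```
-- An extension file (e.g. 12345.1) is reported only when its base file
-- (12345) is orphaned too.
SELECT f.filename
FROM (VALUES ('12345'), ('12345.1'), ('99999.2')) AS f(filename)
WHERE split_part(f.filename, '.', 1) IN (
    SELECT b.filename                      -- hypothetical orphaned bases
    FROM (VALUES ('12345')) AS b(filename)
);
-- Returns 12345 and 12345.1; 99999.2 is skipped because its base 99999
-- is still expected.
```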
This is to incorporate the stability improvement changes we made for the 7X
views in #15480. Mainly three things were done in that PR:

1. Do not count AO/CO file segments with eof=0 as missing;
2. Do not count files for views as missing;
3. Do not count extended file segments as orphaned as long as the base
   relfile is expected.

The 6X views already cover the first and second points; now just make them
more aligned with how 7X does it. The third point is not in 6X, so include it.

This doesn't bump the extension version because we haven't released it yet.
6X backport of #16428. Mostly a clean merge except that in 6X the views are
in gp_check_functions instead of gp_toolkit. Another difference is that 6X does
not have cluster-wide gp_stat_activity and also pg_stat_activity does not show
background backends (so we do not have the need to check 'backend_type'). So
adjust accordingly.

Original commit message:

This commit mainly improves the gp_check_orphaned_files view. Since its user
is ultimately likely to remove the reported orphaned files, we would not like
that to cause any potential issues such as:

* Main relation files associated with dropped tables are kept until the
  next CHECKPOINT in order to prevent potential issues with crash recovery
  (see the comments for mdunlink()), so removing them would cause problems.
* Relation files created during an ongoing transaction can be recognized as
  orphaned: another session will see the old pg_class.relfilenode, so it
  would think the new relfilenode is orphaned. If one removes it, we might
  have data loss.

So accordingly, the improvements are (sketched below):
* We force a CHECKPOINT prior to collecting the orphaned file list.
* We exclude other activities that might change pg_class.relfilenode while
  running the view. This is done by locking pg_class in SHARE mode (which
  blocks writes but allows reads) with the "nowait" flag (so the lock
  attempt returns immediately and we are not blocked forever). We also
  check pg_stat_activity to make sure there is no idle transaction (because
  an idle transaction might already have modified pg_class and released the
  lock). In the new view we do that by simply making sure there are no
  concurrent client sessions.

These steps need to be written in a function, so we rewrite the
gp_check_orphaned_files view to SELECT from a new UDF.

Also improve the view results by adding the relative path of each reported
file, for convenience in taking further action on the files.

For the test, adjusted a few places so that the new changes won't cause flakiness.
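
A condensed sketch of the safeguards in the new UDF (function name and body are assumptions, not the shipped implementation):

```
CREATE OR REPLACE FUNCTION orphaned_check_safeguards_sketch()
RETURNS void AS $$
BEGIN
    -- Refuse to run if any other client session exists: an idle
    -- transaction may already have modified pg_class and released
    -- its lock.
    IF EXISTS (SELECT 1 FROM pg_stat_activity
               WHERE pid <> pg_backend_pid()) THEN
        RAISE EXCEPTION 'There is a client session running. Aborting...';
    END IF;

    -- Flush relation files of dropped tables, which are kept until
    -- the next checkpoint for crash-recovery safety.
    EXECUTE 'CHECKPOINT';

    -- Block concurrent relfilenode changes without waiting forever.
    LOCK TABLE pg_class IN SHARE MODE NOWAIT;

    -- ... collect the orphaned file list here ...
END;
$$ LANGUAGE plpgsql;
```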
Backported from GPDB7:
greenplum-db/gpdb@a93ab09
with the following changes:
 - In the TAP test framework added the `standby` parameter
   for the enable_restoring function to distinguish
   between standby and recovery which in GPDB7 is done
   via .signal files.
 - In the TAP test put recovery_target_name into recovery.conf
   file instead of postgresql.conf
 - In the TAP test added variable declarations.

Original GPDB7 commit message:

Backported from upstream with change in the test:
diff:
- run_log(['pg_ctl', '-D', $node_standby->data_dir,
-		 '-l', $node_standby->logfile, 'start']);
+ run_log(['pg_ctl', '-D', $node_standby->data_dir, '-l',
		$node_standby->logfile, '-o', "-c gp_role=utility --gp_dbid=$node_standby->{_dbid} --gp_contentid=0 -c maintenance_mode=on",
			'start']);

Original Postgres commit on REL_13_BETA1:
postgres/postgres@dc78866

Original Postgres commit message:

Before, if a recovery target is configured, but the archive ended
before the target was reached, recovery would end and the server would
promote without further notice.  That was deemed to be pretty wrong.
With this change, if the recovery target is not reached, it is a fatal
error.

Based-on-patch-by: Leif Gunnar Erlandsen <[email protected]>
Reviewed-by: Kyotaro Horiguchi <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
Backported from Postgres REL_13_BETA1
postgres/postgres@8961355

Original commit message:

Buildfarm member chipmunk has failed twice due to taking >30s, and
twenty-four runs of other members have used >5s.  The test is new in
v13, so no back-patch.
When memory usage has reached the Vmem limit or a resource group limit, any
new allocation loops in gp_malloc and gp_failed_to_alloc, eventually erroring
out with "ERRORDATA_STACK_SIZE exceeded".

We therefore print the log message header using write_stderr.

(cherry picked from commit a7210a4)
When curling https://www.bing.com/ with an empty header, the response size
used to be more than 10000 bytes, but that is no longer the case (it is 5594
now), so update the test to expect more than 1000 bytes to match the change.

Authored-by: Shaoqi Bai <[email protected]>
**Issue:**
Currently `gpexpand` errors out whenever it is run with a user-created input file (one not created via the `gpexpand` interview process) on a cluster that has custom tablespaces, with the following error -
```
$ cat gpexpand_inputfile_20230914_201220
jnihal3MD6M.vmware.com|jnihal3MD6M.vmware.com|7005|/tmp/demoDataDir3|5|3|p

$ gpexpand -i gpexpand_inputfile_20230914_201220
20230914:20:13:04:066896 gpexpand:jnihal3MD6M:jnihal-[ERROR]:-gpexpand failed: [Errno 2] No such file or directory: 'gpexpand_inputfile_20230914_201220.ts'
```

**RCA:**
This is happening due to commit 9b70ba8, which requires `gpexpand` to have a separate tablespace input configuration file (`<input_file>.ts`) whenever there are custom tablespaces in the database. However, this file is only created when the user goes through the `gpexpand` interview process to build the input file.

In cases where the user creates the input file manually, the tablespace file is missing, which causes the above error.

**Fix:**
Add a check in the `read_tablespace_file()` function for whether the file is present. If it is not, create the file automatically and exit, giving users a chance to review it (in case they want to change the `tablespace` locations), and prompt them to re-run `gpexpand`.

The call to `read_tablespace_file()` is also moved to before the expansion starts, so that we exit before any expansion work begins and the user does not have to `rollback` when re-running `gpexpand`.

```
$ gpexpand -i gpexpand_inputfile_20230914_201220
20230914:20:24:00:014186 gpexpand:jnihal3MD6M:jnihal-[WARNING]:-Could not locate tablespace input configuration file 'gpexpand_inputfile_20230914_201220.ts'. A new tablespace input configuration file is written to 'gpexpand_inputfile_20230914_201220.ts'. Please review the file and re-run with: gpexpand -i gpexpand_inputfile_20230914_201220
20230914:20:24:00:014186 gpexpand:jnihal3MD6M:jnihal-[INFO]:-Exiting...

$ gpexpand -i gpexpand_inputfile_20230914_201220   --> re-run with the same input file
```
* docs - add filepath column to gp_check_orphaned_files

* add a caution
bimboterminator1 and others added 9 commits November 24, 2023 17:13
The Flow structure has a segidColIdx field, which references the gp_segment_id
column in the plan's targetList when an Explicit Redistribute Motion is
requested. The _copyFlow, _equalFlow, _outFlow and _readFlow functions did not
handle the segidColIdx field, so the Flow node could not be serialized,
deserialized, or copied correctly.

The problem manifested itself when a query had an UPDATE/DELETE operation
requiring Explicit Redistribute inside a SubPlan (or InitPlan). The Explicit
Redistribute was not applied correctly inside the apply_motion_mutator
function, because by that moment the segidColIdx value had been lost: SubPlans
had previously been mutated inside the ParallelizeSubplan function, where the
SubPlan's plan was copied via copyObject. That copies the whole plan,
including the Flow node (via _copyFlow), but the Flow copy did not include the
segidColIdx field, which is required for correct execution of Explicit
Redistribute Motion.

Therefore, this patch solves the issue by adding segidColIdx to the list of
fields to copy, serialize, deserialize and compare in the _copyFlow, _outFlow,
_readFlow and _equalFlow functions respectively.

Cherry-picked from: 4a9aac4
The executor enabled the EXEC_FLAG_REWIND flag for all types of SubPlans,
whether InitPlans or correlated/uncorrelated SubPlans. This flag indicates
that a rescan is expected and is used during initialization of the executor
state. However, if a query had InitPlans containing non-rescannable nodes
(like Split Update in the tests), the executor failed with an assertion error
(for example, when calling ExecInitSplitUpdate while initializing the executor
state inside ExecInitNode for the InitPlan).

Because InitPlans are essentially executed only once, there is no need to
expect a rescan of the InitPlan. Therefore, in order to support
non-rescannable operations inside InitPlans, this patch disables the
EXEC_FLAG_REWIND flag for InitPlans.

This patch partially restores vanilla Postgres logic, which used the
plannedstmt->rewindPlanIDs bitmapset to decide whether the current SubPlan
should be executed with the EXEC_REWIND flag. This bitmapset used to be
filled with the ids of SubPlans that could optimize a rescan when
EXEC_REWIND is set, like parameterless subplans. Other types of SubPlans
were considered rescannable by default, and there was no need to set the
EXEC_REWIND flag for them. However, GPDB interprets the EXEC_REWIND flag
as an indicator that the node is likely to be rescanned, and also uses it
to delay the eager free. Therefore, this patch fills
plannedstmt->rewindPlanIDs with all subplan ids except InitPlans, and sets
the EXEC_REWIND flag only for subplans present in the rewindPlanIDs
bitmapset.

For the legacy optimizer, the if-clause controlling the filling of the
bitmapset is changed inside the build_subplan function in order to filter out
any InitPlans.

For the ORCA optimizer, rewindPlanIDs was not previously used, and this patch
adds the logic to fill this bitmapset with subplan ids. It extends the
existing SetInitPlanVariables function and renames it to SetSubPlanVariables.
This function originally set nInitPlans and nParamExec in PlannedStmt, and
set qDispSliceId for each InitPlan found during plan tree traversal. This
patch extends that behaviour and additionally fills the rewindPlanIDs
bitmapset for each SubPlan found, except InitPlans.

On the executor side, a check on whether the SubPlan is in
planned_stmt->rewindPlanIDs is added to the InitPlan function. From that
point, SubPlans are initialized with the EXEC_REWIND flag only if they are
not InitPlans.

Ticket: ADBDEV-4059

Cherry-picked from: d0a5bc0
When a query had a modifying command inside a correlated SubPlan, the
ModifyTable node could be rescanned for each outer tuple. That led to
execution errors (rescan of certain nodes is not supported). This happened
because the ParallelizeCorrelatedSubplanMutator function did not expect a
ModifyTable node inside correlated SubPlans.

This patch adds support for ModifyTable nodes in correlated SubPlans.
Currently, a ModifyTable node can get into a SubPlan only as part of a CTE
query; therefore, it is either wrapped in a SubqueryScan node or standalone,
depending on whether the SubqueryScan is trivial. The patch affects the
ParallelizeCorrelatedSubplanMutator function: it extends the if-clause that
chooses the plan nodes needing to be broadcast or focused and then
materialized, adding conditions specific to modifying operations. These
conditions check whether the current node is a SubqueryScan with a
ModifyTable directly under it, or a standalone ModifyTable. If a condition is
satisfied, the node is processed the same way as any other Scan-type node:
the result of the ModifyTable is either broadcast or focused depending on the
target flow type, and then materialized in order to avoid rescanning the
underlying nodes.

Cherry-picked-from: c164546
There are several cases in which the planner produces a bogus plan for
queries on replicated tables with volatile functions, which may lead to wrong
results or even a segfault (a repro sketch for case 1 follows this message).

1. Volatile function in subplan

A query with a subplan containing volatile functions over distributed
replicated tables may fail to add a gather motion. Currently, gpdb replaces
the locus of subplan subtrees with SingleQE when they have a SegmentGeneral
locus (i.e. a replicated table scan, with data only on segments) and contain
volatile functions. But this is incorrect, because the SingleQE locus assumes
that data is available on any segment instance, including the coordinator. As
a result, the planner sees no reason to add a gather motion above such a
subtree, and the resulting plan is invalid.

Solution: make an explicit gather motion in this case.

2. Volatile function in modify subplan target list

A query on distributed replicated tables may fail to add a broadcast motion.
Usually, an insert query uses a subquery. Volatile functions in such
subqueries are caught in set_subqueryscan_pathlist, which adds a gather
motion for them. But some subqueries are simplified at early planning stages
and the subquery subtree is substituted into the main plan (see
is_simple_subquery). In such cases we should explicitly catch volatile
functions before adding the ModifyTable node. To ensure rows are forwarded to
all segments, we should replace the subplan locus (with volatile functions)
with SingleQE before requesting the motion append; this is necessary because
the planner otherwise considers it pointless to send rows from replicated
tables.

Solution: set CdbLocusType_SingleQE in such cases, which later makes the
broadcast motion.

3. Volatile function in deleting motion flow

A query containing volatile functions on distributed replicated tables may
fail to add a broadcast motion, producing a wrong plan. This happens because
apply_motion_mutator deletes the pre-existing broadcast motion in order to
recreate it later, but we should save the motion request so an appropriate
motion can be created above the child node. The original flow of the child
node is restored after the motion is created.

Solution: save such flows, which later make the broadcast motion.

4. Volatile function in correlated subplan quals

A query on distributed replicated tables may fail to add a broadcast motion,
producing a wrong plan. This happens because the broadcast motion is not made
when a volatile function exists: the planner considers it pointless to send
rows from replicated tables. But if a volatile function exists in the quals,
we need the broadcast motion.

Solution: set CdbLocusType_SingleQE in such cases, which later makes the
broadcast motion.

Cherry-picked from: 7ef4218

to append plan changes for 516bd3a

Cherry-picked from: cc35273
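A hedged repro shape for case 1 above (table names are hypothetical):

```
-- Volatile function in a subplan over a replicated table. Before the
-- fix, the subtree's locus was rewritten to SingleQE without a gather
-- motion, yielding an invalid plan.
CREATE TABLE rep_v (a int) DISTRIBUTED REPLICATED;
CREATE TABLE dist_v (a int) DISTRIBUTED BY (a);

SELECT * FROM dist_v
WHERE a IN (SELECT a + (random() * 0)::int FROM rep_v);  -- volatile subplan
```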
Greenplum Python scripts try to parse system commands' STDOUT and STDERR
in English and may fail if the locale differs from en_US. This patch
adds tests that cover this case.

Also, this patch reinstalls glibc-common in the Docker container. This is
necessary to get langpacks in Docker, because the Docker images don't
contain them.

Cherry-picked-from: 6298e77
This should have been done with #16428: we need to disable autovacuum when
running the gp_check_files regress test. Otherwise we might see errors like:

```
@@ -53,12 +53,8 @@
 -- check orphaned files, note that this forces a checkpoint internally.
 set client_min_messages = ERROR;
 select gp_segment_id, filename from run_orphaned_files_view();
- gp_segment_id | filename
----------------+----------
-             1 | 987654
-             1 | 987654.3
-(2 rows)
-
+ERROR:  failed to retrieve orphaned files after 10 minutes of retries.
+CONTEXT:  PL/pgSQL function run_orphaned_files_view() line 19 at RAISE
 reset client_min_messages;
```

In the log we have:
```
2023-09-20 15:33:00.766420 UTC,"gpadmin","regression",p148081,th-589358976,"[local]",,2023-09-20 15:31:39 UTC,0,con19,cmd65,seg-1,,dx38585,,sx1,"LOG","00000","attempt failed 17 with error: There is a client session running on one or more segment. Aborting...",,,,,"PL/pgSQL function run_orphaned_files_view() line 11 at RAISE","select gp_segment_id, filename from run_orphaned_files_view();",0,,"pl_exec.c",3857,

```

It is possible that some background jobs created backends that we
should avoid when querying the gp_check_orphaned_files view. As we have
decided to make the view conservative (disallowing any backends that could
cause false positives in the view results), fixing the test is what we need.

The test has a safeguard that loops for 10 minutes, querying the view
repeatedly (function run_orphaned_files_view()). But it did not solve the
issue, because it saw only one snapshot of pg_stat_activity for the entire
execution of the function. Now explicitly call pg_stat_clear_snapshot() to
solve that issue (sketched below).

Co-authored-by: Ashwin Agrawal [email protected]
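
A sketch of the retry helper with the snapshot fix (an assumed shape of run_orphaned_files_view(), not the exact test function):

```
CREATE OR REPLACE FUNCTION run_orphaned_files_view_sketch()
RETURNS TABLE (gp_segment_id int, filename text) AS $$
DECLARE
    attempt int := 0;
BEGIN
    LOOP
        BEGIN
            RETURN QUERY
                SELECT v.gp_segment_id, v.filename::text
                FROM gp_check_orphaned_files v;
            RETURN;
        EXCEPTION WHEN OTHERS THEN
            attempt := attempt + 1;
            IF attempt >= 120 THEN   -- ~10 minutes at 5 s per retry
                RAISE EXCEPTION 'failed to retrieve orphaned files after 10 minutes of retries.';
            END IF;
            PERFORM pg_stat_clear_snapshot();  -- the fix: refresh activity data
            PERFORM pg_sleep(5);
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```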
… deadlock"

This commit causes issues on machines with disabled IPv6 (e.g. in our CI
environment). We should investigate it more thoroughly.

This reverts commit 7f3c91f.
@Stolb27 requested a review from a team on December 5, 2023 20:21
@BenderArenadata

Allure report https://allure-ee.adsw.io/launch/59863

@BenderArenadata

Failed job Behave tests on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/857046

@BenderArenadata

Failed job Resource group isolation tests on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/857048

@BenderArenadata

Failed job Regression tests with Postgres on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/857039

@BenderArenadata

Failed job Regression tests with Postgres on x86_64: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/857038

@BenderArenadata

Failed job Regression tests with ORCA on x86_64: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/857040

@BenderArenadata

Failed job Regression tests with ORCA on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/857041

@BenderArenadata

Allure report https://allure-ee.adsw.io/launch/59911

@BenderArenadata

Failed job Resource group isolation tests on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/859623

@BenderArenadata

Failed job Regression tests with Postgres on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/859615

@BenderArenadata

Failed job Regression tests with ORCA on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/859617

@BenderArenadata

Failed job Regression tests with Postgres on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/860207

@BenderArenadata

Failed job Regression tests with ORCA on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/860206

@BenderArenadata

Allure report https://allure-ee.adsw.io/launch/59930

@BenderArenadata

Failed job Resource group isolation tests on ppc64le: https://gitlab.adsw.io/arenadata/github_mirroring/gpdb/-/jobs/860907

@Stolb27 merged commit 974ba91 into adb-6.x-dev on Dec 7, 2023
3 of 5 checks passed
@Stolb27 deleted the 6.26.0-sync2dev branch on December 7, 2023 10:16