make tests Parallel, again #965

Merged (55 commits, Jul 31, 2024)

Commits
3ff7e0a
add parallel test execution, fix https://github.com/Altinity/clickhou…
Slach Jul 24, 2024
ef5e757
debug https://github.com/Altinity/clickhouse-backup/issues/888
Slach Jul 24, 2024
fd22115
debug https://github.com/Altinity/clickhouse-backup/issues/888, refac…
Slach Jul 26, 2024
fd760d3
debug https://github.com/Altinity/clickhouse-backup/issues/888, refor…
Slach Jul 26, 2024
dd50563
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix C…
Slach Jul 26, 2024
cfceb6c
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix g…
Slach Jul 26, 2024
cd85b12
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix T…
Slach Jul 26, 2024
cb7ee61
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix T…
Slach Jul 26, 2024
2002704
debug https://github.com/Altinity/clickhouse-backup/issues/888, measu…
Slach Jul 26, 2024
57d879a
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix C…
Slach Jul 26, 2024
8e60834
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 26, 2024
603da07
debug https://github.com/Altinity/clickhouse-backup/issues/888, trick…
Slach Jul 26, 2024
5631af6
debug https://github.com/Altinity/clickhouse-backup/issues/888, trick…
Slach Jul 26, 2024
dfb1bc4
debug https://github.com/Altinity/clickhouse-backup/issues/888, impro…
Slach Jul 26, 2024
b79a0fb
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 26, 2024
0644b5e
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 26, 2024
021bca3
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 26, 2024
18c6773
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 26, 2024
2d0b6ba
debug https://github.com/Altinity/clickhouse-backup/issues/888, try s…
Slach Jul 27, 2024
5bfa046
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 27, 2024
6c0db8a
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 27, 2024
0cbc506
debug https://github.com/Altinity/clickhouse-backup/issues/888, impro…
Slach Jul 27, 2024
c022b41
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 27, 2024
454baed
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 27, 2024
c73bc8d
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 28, 2024
16b72ee
debug https://github.com/Altinity/clickhouse-backup/issues/888, repla…
Slach Jul 28, 2024
a81daac
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix p…
Slach Jul 28, 2024
debadcc
debug https://github.com/Altinity/clickhouse-backup/issues/888, incre…
Slach Jul 28, 2024
2644dbc
debug https://github.com/Altinity/clickhouse-backup/issues/888, RUN_P…
Slach Jul 28, 2024
54f5b20
debug https://github.com/Altinity/clickhouse-backup/issues/888, prope…
Slach Jul 28, 2024
6ad2f45
debug https://github.com/Altinity/clickhouse-backup/issues/888, try t…
Slach Jul 28, 2024
1eb37c5
debug https://github.com/Altinity/clickhouse-backup/issues/888, try i…
Slach Jul 28, 2024
27c42a0
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix c…
Slach Jul 28, 2024
0e19b03
debug https://github.com/Altinity/clickhouse-backup/issues/888, RUN_P…
Slach Jul 28, 2024
ff5ecbb
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 28, 2024
d0f9dd8
Merge branch 'master' of github.com:Altinity/clickhouse-backup into p…
Slach Jul 29, 2024
0e085c1
debug https://github.com/Altinity/clickhouse-backup/issues/888, use d…
Slach Jul 29, 2024
27855c0
debug https://github.com/Altinity/clickhouse-backup/issues/888, use d…
Slach Jul 30, 2024
47f00ed
debug https://github.com/Altinity/clickhouse-backup/issues/888, use `…
Slach Jul 30, 2024
ca93e21
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix p…
Slach Jul 30, 2024
cdcadeb
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix T…
Slach Jul 30, 2024
882d921
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix T…
Slach Jul 30, 2024
533040c
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix T…
Slach Jul 30, 2024
cdd84f3
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix T…
Slach Jul 30, 2024
c923d29
debug https://github.com/Altinity/clickhouse-backup/issues/888, fix T…
Slach Jul 30, 2024
400b118
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 30, 2024
0ef2aa0
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 30, 2024
07521a3
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 30, 2024
c9288fd
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 30, 2024
b4cecca
debug https://github.com/Altinity/clickhouse-backup/issues/888, debug…
Slach Jul 30, 2024
34ed671
debug https://github.com/Altinity/clickhouse-backup/issues/888, RUN_P…
Slach Jul 31, 2024
e9eca42
debug https://github.com/Altinity/clickhouse-backup/issues/888, RUN_P…
Slach Jul 31, 2024
b80f0a8
debug https://github.com/Altinity/clickhouse-backup/issues/888, TestI…
Slach Jul 31, 2024
492f3f2
debug https://github.com/Altinity/clickhouse-backup/issues/888, test …
Slach Jul 31, 2024
79dfc7c
debug https://github.com/Altinity/clickhouse-backup/issues/888, retur…
Slach Jul 31, 2024
33 changes: 25 additions & 8 deletions .github/workflows/build.yaml
@@ -7,7 +7,7 @@ on:

push:
branches:
- master
- "*"

jobs:
build:
@@ -269,11 +269,13 @@ jobs:

- name: Running integration tests
env:
RUN_PARALLEL: 2
GOROOT: ${{ env.GOROOT_1_22_X64 }}
CLICKHOUSE_VERSION: ${{ matrix.clickhouse }}
# options for advanced debug CI/CD
# RUN_TESTS: "TestIntegrationS3"
# RUN_TESTS: "TestLongListRemote"
# LOG_LEVEL: "debug"
# TEST_LOG_LEVEL: "debug"
# GCS_DEBUG: "true"
# SFTP_DEBUG: "true"
# AZBLOB_DEBUG: "true"
@@ -293,7 +295,7 @@ jobs:
QA_GCS_OVER_S3_SECRET_KEY: ${{ secrets.QA_GCS_OVER_S3_SECRET_KEY }}
QA_GCS_OVER_S3_BUCKET: ${{ secrets.QA_GCS_OVER_S3_BUCKET }}
run: |
set -x
set -xe
echo "CLICKHOUSE_VERSION=${CLICKHOUSE_VERSION}"
echo "GCS_TESTS=${GCS_TESTS}"

@@ -311,12 +313,27 @@ jobs:
export COMPOSE_FILE=docker-compose.yml
fi

command -v docker-compose || (apt-get update && apt-get install -y python3-pip && pip3 install -U docker-compose)

export CUR_DIR="$(pwd)/test/integration"
export CLICKHOUSE_BACKUP_BIN="$(pwd)/clickhouse-backup/clickhouse-backup-race"
docker-compose -f test/integration/${COMPOSE_FILE} up -d || ( docker-compose -f test/integration/${COMPOSE_FILE} ps -a && docker-compose -f test/integration/${COMPOSE_FILE} logs clickhouse && exit 1 )
docker-compose -f test/integration/${COMPOSE_FILE} ps -a
go test -timeout 60m -failfast -tags=integration -run "${RUN_TESTS:-.+}" -v test/integration/integration_test.go
docker compose -f "${CUR_DIR}/${COMPOSE_FILE}" --progress=quiet pull

pids=()
for ((i = 0; i < RUN_PARALLEL; i++)); do
docker compose -f ${CUR_DIR}/${COMPOSE_FILE} --project-name project${i} --progress plain up -d &
pids+=($!)
done


for pid in "${pids[@]}"; do
if wait "$pid"; then
echo "$pid docker compose up successful"
else
echo "$pid the docker compose up failed. Exiting."
exit 1 # Exit with an error code if any command fails
fi
done

go test -parallel ${RUN_PARALLEL} -timeout 60m -failfast -tags=integration -run "${RUN_TESTS:-.+}" -v test/integration/integration_test.go
- name: Format integration coverage
env:
GOROOT: ${{ env.GOROOT_1_22_X64 }}
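The workflow change above starts `RUN_PARALLEL` docker compose projects (`project0`, `project1`, …) and then runs `go test -parallel ${RUN_PARALLEL}`, so each parallel test must claim exactly one isolated environment. A minimal sketch of that hand-out, assuming a channel-backed pool (the `projectPool` type and its method names are illustrative, not the harness in `integration_test.go`):

```go
package main

import (
	"fmt"
	"sync"
)

// projectPool hands out isolated compose project names
// ("project0" .. "projectN-1") so that tests running under
// `go test -parallel N` never share a ClickHouse container.
type projectPool struct {
	ch chan string
}

func newProjectPool(n int) *projectPool {
	p := &projectPool{ch: make(chan string, n)}
	for i := 0; i < n; i++ {
		p.ch <- fmt.Sprintf("project%d", i)
	}
	return p
}

// acquire blocks until a project environment is free.
func (p *projectPool) acquire() string { return <-p.ch }

// release returns the environment for the next parallel test.
func (p *projectPool) release(name string) { p.ch <- name }

func main() {
	pool := newProjectPool(2)
	var wg sync.WaitGroup
	for t := 0; t < 4; t++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			env := pool.acquire() // blocks when both projects are busy
			defer pool.release(env)
			fmt.Printf("test %d uses %s\n", id, env)
		}(t)
	}
	wg.Wait()
}
```

With a pool of 2 and four goroutines, at most two tests run at once, matching `RUN_PARALLEL: 2` in the workflow.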
2 changes: 1 addition & 1 deletion Dockerfile
@@ -9,7 +9,7 @@ RUN rm -fv /etc/apt/sources.list.d/clickhouse.list && \
find /etc/apt/ -type f -name *.list -exec sed -i 's/ru.archive.ubuntu.com/archive.ubuntu.com/g' {} + && \
( apt-get update || true ) && \
apt-get install -y --no-install-recommends gnupg ca-certificates wget && update-ca-certificates && \
for srv in "keyserver.ubuntu.com" "pool.sks-keyservers.net" "keys.gnupg.net"; do apt-key adv --keyserver $srv --recv-keys 52B59B1571A79DBC054901C0F6BC817356A3D45E; if [ $? -eq 0 ]; then break; fi; done && \
for srv in "keyserver.ubuntu.com" "pool.sks-keyservers.net" "keys.gnupg.net"; do host $srv; apt-key adv --keyserver $srv --recv-keys 52B59B1571A79DBC054901C0F6BC817356A3D45E; if [ $? -eq 0 ]; then break; fi; done && \
DISTRIB_CODENAME=$(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d "=" -f 2) && \
echo ${DISTRIB_CODENAME} && \
echo "deb https://ppa.launchpadcontent.net/longsleep/golang-backports/ubuntu ${DISTRIB_CODENAME} main" > /etc/apt/sources.list.d/golang.list && \
17 changes: 17 additions & 0 deletions pkg/backup/delete.go
@@ -441,3 +441,20 @@ func (b *Backuper) CleanRemoteBroken(commandId int) error {
}
return nil
}

func (b *Backuper) cleanPartialRequiredBackup(ctx context.Context, disks []clickhouse.Disk, currentBackupName string) error {
if localBackups, _, err := b.GetLocalBackups(ctx, disks); err == nil {
for _, localBackup := range localBackups {
if localBackup.BackupName != currentBackupName && localBackup.DataSize+localBackup.CompressedSize+localBackup.MetadataSize+localBackup.RBACSize == 0 {
if err = b.RemoveBackupLocal(ctx, localBackup.BackupName, disks); err != nil {
return fmt.Errorf("CleanPartialRequiredBackups %s -> RemoveBackupLocal cleaning error: %v", localBackup.BackupName, err)
} else {
b.log.Infof("CleanPartialRequiredBackups %s deleted", localBackup.BackupName)
}
}
}
} else {
return fmt.Errorf("CleanPartialRequiredBackups -> GetLocalBackups cleaning error: %v", err)
}
return nil
}
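The cleanup condition in `cleanPartialRequiredBackup` can be read as a small predicate: a local backup whose data, compressed, metadata, and RBAC sizes are all still zero is assumed to be an interrupted download. A sketch of that check (the standalone helper and its flat parameter list are illustrative, not the project's actual API):

```go
package main

import "fmt"

// isPartialBackup mirrors the size-sum test above: every size
// counter still zero means nothing was fully written, so the
// backup is treated as a partially downloaded leftover.
func isPartialBackup(dataSize, compressedSize, metadataSize, rbacSize uint64) bool {
	return dataSize+compressedSize+metadataSize+rbacSize == 0
}

func main() {
	fmt.Println(isPartialBackup(0, 0, 0, 0))    // interrupted download → true
	fmt.Println(isPartialBackup(1024, 0, 0, 0)) // has real data → false
}
```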
22 changes: 6 additions & 16 deletions pkg/backup/download.go
@@ -270,26 +270,16 @@ func (b *Backuper) Download(backupName string, tablePattern string, partitions [

//clean partially downloaded requiredBackup
if remoteBackup.RequiredBackup != "" {
if localBackups, _, err = b.GetLocalBackups(ctx, disks); err == nil {
for _, localBackup := range localBackups {
if localBackup.BackupName != remoteBackup.BackupName && localBackup.DataSize+localBackup.CompressedSize+localBackup.MetadataSize+localBackup.RBACSize == 0 {
if err = b.RemoveBackupLocal(ctx, localBackup.BackupName, disks); err != nil {
return fmt.Errorf("downloadWithDiff -> RemoveBackupLocal cleaning error: %v", err)
} else {
b.log.Infof("partial required backup %s deleted", localBackup.BackupName)
}
}
}
} else {
return fmt.Errorf("downloadWithDiff -> GetLocalBackups cleaning error: %v", err)
if err = b.cleanPartialRequiredBackup(ctx, disks, remoteBackup.BackupName); err != nil {
return err
}
}

log.WithFields(apexLog.Fields{
"duration": utils.HumanizeDuration(time.Since(startDownload)),
"download_size": utils.FormatBytes(dataSize + metadataSize + rbacSize + configSize),
"object_disk_size": utils.FormatBytes(backupMetadata.ObjectDiskSize),
"version": backupVersion,
"duration": utils.HumanizeDuration(time.Since(startDownload)),
"download_size": utils.FormatBytes(dataSize + metadataSize + rbacSize + configSize),
"object_disk_size": utils.FormatBytes(backupMetadata.ObjectDiskSize),
"version": backupVersion,
}).Info("done")
return nil
}
8 changes: 8 additions & 0 deletions pkg/backup/restore.go
@@ -223,6 +223,14 @@ func (b *Backuper) Restore(backupName, tablePattern string, databaseMapping, tab
}
}
}

//clean partially downloaded requiredBackup
if backupMetadata.RequiredBackup != "" {
if err = b.cleanPartialRequiredBackup(ctx, disks, backupMetadata.BackupName); err != nil {
return err
}
}

log.WithFields(apexLog.Fields{
"duration": utils.HumanizeDuration(time.Since(startRestore)),
"version": backupVersion,
5 changes: 3 additions & 2 deletions pkg/clickhouse/clickhouse.go
@@ -65,7 +65,7 @@ func (ch *ClickHouse) Connect() error {
},
MaxOpenConns: ch.Config.MaxConnections,
ConnMaxLifetime: 0, // don't change it, it related to SYSTEM SHUTDOWN behavior for properly rebuild RBAC lists on 20.4-22.3
MaxIdleConns: 1,
MaxIdleConns: 0,
DialTimeout: timeout,
ReadTimeout: timeout,
}
@@ -802,7 +802,7 @@ func (ch *ClickHouse) AttachTable(ctx context.Context, table metadata.TableMetad
if ch.version <= 21003000 {
return fmt.Errorf("your clickhouse-server version doesn't support SYSTEM RESTORE REPLICA statement, use `restore_as_attach: false` in config")
}
query := fmt.Sprintf("DETACH TABLE `%s`.`%s`", table.Database, table.Table)
query := fmt.Sprintf("DETACH TABLE `%s`.`%s` SYNC", table.Database, table.Table)
if err := ch.Query(query); err != nil {
return err
}
@@ -1157,6 +1157,7 @@ func (ch *ClickHouse) CalculateMaxFileSize(ctx context.Context, cfg *config.Conf
if !cfg.General.UploadByPart {
maxSizeQuery = "SELECT toInt64(max(data_by_disk) * 1.02) AS max_file_size FROM (SELECT disk_name, max(toInt64(bytes_on_disk)) data_by_disk FROM system.parts GROUP BY disk_name)"
}
maxSizeQuery += " SETTINGS empty_result_for_aggregation_by_empty_set=0"
if err := ch.SelectSingleRow(ctx, &rows, maxSizeQuery); err != nil {
return 0, fmt.Errorf("can't calculate max(bytes_on_disk): %v", err)
}
8 changes: 6 additions & 2 deletions pkg/storage/general.go
@@ -160,6 +160,7 @@ func (bd *BackupDestination) BackupList(ctx context.Context, parseMetadata bool,
if err != nil {
return nil, err
}
cacheMiss := false
err = bd.Walk(ctx, "/", false, func(ctx context.Context, o RemoteFile) error {
backupName := strings.Trim(o.Name(), "/")
if !parseMetadata || (parseMetadataOnly != "" && parseMetadataOnly != backupName) {
@@ -231,6 +232,7 @@
}
goodBackup := Backup{m, "", mf.LastModified()}
listCache[backupName] = goodBackup
cacheMiss = true
result = append(result, goodBackup)
return nil
})
@@ -244,8 +246,10 @@
sort.SliceStable(result, func(i, j int) bool {
return result[i].UploadDate.Before(result[j].UploadDate)
})
if err = bd.saveMetadataCache(ctx, listCache, result); err != nil {
return nil, fmt.Errorf("bd.saveMetadataCache return error: %v", err)
if cacheMiss || len(result) < len(listCache) {
if err = bd.saveMetadataCache(ctx, listCache, result); err != nil {
return nil, fmt.Errorf("bd.saveMetadataCache return error: %v", err)
}
}
return result, nil
}
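The `general.go` change persists the metadata cache only when something actually changed: either a backup had to be parsed from remote storage (`cacheMiss`) or a cached backup disappeared (`len(result) < len(listCache)`). A runnable sketch of that save-on-change pattern, with a plain map standing in for the cache and illustrative names throughout (not the real `BackupDestination` API):

```go
package main

import "fmt"

// cache maps backup name -> parsed metadata (stand-in for the
// persisted list cache in pkg/storage/general.go).
type cache map[string]string

// listBackups walks the remote backup names, parsing metadata
// only on a cache miss, and persists the cache only when a miss
// occurred or a previously cached backup vanished.
func listBackups(remote []string, c cache, save func(cache)) []string {
	cacheMiss := false
	var result []string
	for _, name := range remote {
		if _, ok := c[name]; !ok {
			c[name] = "parsed-metadata" // expensive remote read happens here
			cacheMiss = true
		}
		result = append(result, name)
	}
	// fewer results than cached entries means stale entries remain
	if cacheMiss || len(result) < len(c) {
		save(c)
	}
	return result
}

func main() {
	c := cache{}
	saves := 0
	save := func(cache) { saves++ }
	listBackups([]string{"b1", "b2"}, c, save) // both miss -> save
	listBackups([]string{"b1", "b2"}, c, save) // fully cached -> no save
	listBackups([]string{"b1"}, c, save)       // b2 now stale -> save
	fmt.Println("saves:", saves)               // prints "saves: 2"
}
```

Skipping the save on a fully warm cache avoids rewriting the cache file on every `BackupList` call, which matters once parallel tests hammer the same destination.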
4 changes: 2 additions & 2 deletions pkg/utils/utils.go
@@ -61,13 +61,13 @@ func HumanizeDuration(d time.Duration) string {

func ExecCmd(ctx context.Context, timeout time.Duration, cmd string, args ...string) error {
out, err := ExecCmdOut(ctx, timeout, cmd, args...)
log.Info(out)
log.Debug(out)
return err
}

func ExecCmdOut(ctx context.Context, timeout time.Duration, cmd string, args ...string) (string, error) {
ctx, cancel := context.WithTimeout(ctx, timeout)
log.Infof("%s %s", cmd, strings.Join(args, " "))
log.Debugf("%s %s", cmd, strings.Join(args, " "))
out, err := exec.CommandContext(ctx, cmd, args...).CombinedOutput()
cancel()
return string(out), err
File renamed without changes.
1 change: 1 addition & 0 deletions test/integration/config-azblob.yml
@@ -8,6 +8,7 @@ clickhouse:
host: clickhouse
port: 9000
restart_command: bash -c 'echo "FAKE RESTART"'
timeout: 60s
azblob:
account_name: devstoreaccount1
account_key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
2 changes: 1 addition & 1 deletion test/integration/config-custom-kopia.yml
@@ -15,8 +15,8 @@ clickhouse:
username: backup
password: meow=& 123?*%# МЯУ
sync_replicated_tables: true
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; sql:SYSTEM SHUTDOWN"
timeout: 60s
custom:
# all `kopia` uploads are incremental we don't need {{ .diffFromRemote }}
upload_command: /custom/kopia/upload.sh {{ .backupName }}
2 changes: 1 addition & 1 deletion test/integration/config-custom-restic.yml
@@ -15,8 +15,8 @@ clickhouse:
username: backup
password: meow=& 123?*%# МЯУ
sync_replicated_tables: true
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; sql:SYSTEM SHUTDOWN"
timeout: 60s
custom:
upload_command: /custom/restic/upload.sh {{ .backupName }} {{ .diffFromRemote }}
download_command: /custom/restic/download.sh {{ .backupName }}
2 changes: 1 addition & 1 deletion test/integration/config-custom-rsync.yml
@@ -15,8 +15,8 @@ clickhouse:
username: backup
password: meow=& 123?*%# МЯУ
sync_replicated_tables: true
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; sql:SYSTEM SHUTDOWN"
timeout: 60s
custom:
upload_command: /custom/rsync/upload.sh {{ .backupName }} {{ .diffFromRemote }}
download_command: /custom/rsync/download.sh {{ .backupName }}
2 changes: 1 addition & 1 deletion test/integration/config-database-mapping.yml
@@ -14,8 +14,8 @@ clickhouse:
secure: true
skip_verify: true
sync_replicated_tables: true
timeout: 1s
restart_command: bash -c 'echo "FAKE RESTART"'
timeout: 60s
s3:
access_key: access_key
secret_key: it_is_my_super_secret_key
1 change: 1 addition & 0 deletions test/integration/config-ftp-old.yaml
@@ -13,6 +13,7 @@ clickhouse:
secure: true
skip_verify: true
restart_command: bash -c 'echo "FAKE RESTART"'
timeout: 60s
ftp:
address: "ftp:21"
username: "test_backup"
1 change: 1 addition & 0 deletions test/integration/config-ftp.yaml
@@ -15,6 +15,7 @@ clickhouse:
secure: true
skip_verify: true
restart_command: bash -c 'echo "FAKE RESTART"'
timeout: 60s
ftp:
address: "ftp:21"
username: "test_backup"
2 changes: 1 addition & 1 deletion test/integration/config-gcs-custom-endpoint.yml
@@ -17,10 +17,10 @@ clickhouse:
secure: true
skip_verify: true
sync_replicated_tables: true
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; exec:ls -la /var/lib/clickhouse/access; sql:SYSTEM SHUTDOWN"
# restart_command: bash -c 'echo "FAKE RESTART"'
backup_mutations: true
timeout: 60s
gcs:
bucket: altinity-qa-test
path: backup/{cluster}/{shard}
1 change: 1 addition & 0 deletions test/integration/config-gcs.yml
@@ -8,6 +8,7 @@ clickhouse:
host: clickhouse
port: 9000
restart_command: bash -c 'echo "FAKE RESTART"'
timeout: 60s
gcs:
bucket: altinity-qa-test
path: backup/{cluster}/{shard}
2 changes: 1 addition & 1 deletion test/integration/config-s3-fips.yml
@@ -17,9 +17,9 @@ clickhouse:
secure: true
skip_verify: true
sync_replicated_tables: true
timeout: 2s
restart_command: bash -c 'echo "FAKE RESTART"'
backup_mutations: true
timeout: 60s
# secrets for `FIPS` will be provided from `.env` or from GitHub Actions secrets
s3:
access_key: ${QA_AWS_ACCESS_KEY}
2 changes: 1 addition & 1 deletion test/integration/config-s3-nodelete.yml
@@ -17,8 +17,8 @@ clickhouse:
secure: true
skip_verify: true
sync_replicated_tables: true
timeout: 1s
restart_command: bash -c 'echo "FAKE RESTART"'
timeout: 60s
s3:
access_key: nodelete
secret_key: nodelete_password
2 changes: 1 addition & 1 deletion test/integration/config-s3.yml
@@ -20,10 +20,10 @@ clickhouse:
secure: true
skip_verify: true
sync_replicated_tables: true
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; exec:ls -la /var/lib/clickhouse/access; sql:SYSTEM SHUTDOWN"
# restart_command: bash -c 'echo "FAKE RESTART"'
backup_mutations: true
timeout: 60s
s3:
access_key: access_key
secret_key: it_is_my_super_secret_key
1 change: 1 addition & 0 deletions test/integration/config-sftp-auth-key.yaml
@@ -11,6 +11,7 @@ clickhouse:
secure: true
skip_verify: true
restart_command: bash -c 'echo "FAKE RESTART"'
timeout: 60s
sftp:
address: "sshd"
username: "root"
1 change: 1 addition & 0 deletions test/integration/config-sftp-auth-password.yaml
@@ -12,6 +12,7 @@ clickhouse:
secure: true
skip_verify: true
restart_command: bash -c 'echo "FAKE RESTART"'
timeout: 60s
sftp:
address: "sshd"
username: "root"