Merge pull request #11 from arenadata/2.1.0-sync

2.1.0 sync

vitaliy-popov authored Jan 19, 2023
2 parents 7cb464b + 138f95a commit 2e164f5

Showing 82 changed files with 5,189 additions and 991 deletions.
8 changes: 6 additions & 2 deletions CMakeLists.txt
@@ -72,7 +72,8 @@ list(
enforcement.c
gp_activetable.c
quotamodel.c
relation_cache.c)
relation_cache.c
monitored_db.c)

list(
APPEND
@@ -82,7 +83,10 @@ list(
diskquota--1.0--2.0.sql
diskquota--1.0.3--2.0.sql
diskquota--2.0.sql
diskquota--2.0--1.0.sql)
diskquota--2.0--1.0.sql
diskquota--2.1.sql
diskquota--2.0--2.1.sql
diskquota--2.1--2.0.sql)

add_library(diskquota MODULE ${diskquota_SRC})

125 changes: 125 additions & 0 deletions SECURITY.md
@@ -0,0 +1,125 @@
# Security Release Process

Greenplum Database has adopted this security disclosure and response policy to
ensure we responsibly handle critical issues.

## Reporting a Vulnerability - Private Disclosure Process

Security is of the highest importance and all security vulnerabilities or
suspected security vulnerabilities should be reported to Greenplum Database
privately, to minimize attacks against current users of Greenplum Database
before they are fixed. Vulnerabilities will be investigated and patched in the
next patch (or minor) release as soon as possible. This information could be
kept entirely internal to the project.

If you know of a publicly disclosed security vulnerability for Greenplum
Database, please **IMMEDIATELY** contact the Greenplum Database project team
([email protected]).

**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities!**

To report a vulnerability or a security-related issue, please email the address
above with the details of the vulnerability. The email will be fielded by the
Greenplum Database project team. Emails will be addressed promptly, including a
detailed plan to investigate the issue and any potential workarounds to perform
in the meantime. Do not report non-security-impacting bugs through this
channel. Use [GitHub issues](https://github.com/greenplum-db/gpdb/issues)
instead.

## Proposed Email Content

Provide a descriptive subject line and in the body of the email include the
following information:

* Basic identity information, such as your name and your affiliation or company.
* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and
logs are all helpful to us).
* Description of the effects of the vulnerability on Greenplum Database and the
related hardware and software configurations, so that the Greenplum Database
project team can reproduce it.
* How the vulnerability affects Greenplum Database usage and an estimation of
the attack surface, if there is one.
* List other projects or dependencies that were used in conjunction with
Greenplum Database to produce the vulnerability.

## When to report a vulnerability

* When you think Greenplum Database has a potential security vulnerability.
* When you suspect a potential vulnerability but are unsure whether it impacts
Greenplum Database.
* When you know of or suspect a potential vulnerability on another project that
is used by Greenplum Database.

## Patch, Release, and Disclosure

The Greenplum Database project team will respond to vulnerability reports as
follows:

1. The Greenplum project team will investigate the vulnerability and determine
its effects and criticality.
2. If the issue is not deemed to be a vulnerability, the Greenplum project team
will follow up with a detailed reason for rejection.
3. The Greenplum project team will initiate a conversation with the reporter
promptly.
4. If a vulnerability is acknowledged and the timeline for a fix is determined,
the Greenplum project team will work on a plan to communicate with the
appropriate community, including identifying mitigating steps that affected
users can take to protect themselves until the fix is rolled out.
5. The Greenplum project team will also create a
[CVSS](https://www.first.org/cvss/specification-document) score using the [CVSS
Calculator](https://www.first.org/cvss/calculator/3.0); an illustrative
calculation is sketched after this list. The Greenplum project team makes the
final call on the calculated CVSS; it is better to move quickly than to make
the CVSS perfect. Issues may also be reported to
[Mitre](https://cve.mitre.org/) using this [scoring
calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will
initially be set to private.
6. The Greenplum project team will work on fixing the vulnerability and perform
internal testing before preparing to roll out the fix.
7. A public disclosure date is negotiated by the Greenplum Database project
team and the bug submitter. We prefer to fully disclose the bug as soon as
possible once a user mitigation or patch is available. It is reasonable to
delay disclosure when the bug or the fix is not yet fully understood, or the
solution is not well-tested. The timeframe for disclosure is from immediate
(especially if it’s already publicly known) to a few weeks. The Greenplum
Database project team holds the final say when setting a public disclosure
date.
8. Once the fix is confirmed, the Greenplum project team will patch the
vulnerability in the next patch or minor release, and backport a patch release
into earlier supported releases as necessary. Upon release of the patched
version of Greenplum Database, we will follow the **Public Disclosure
Process**.

## Public Disclosure Process

The Greenplum project team publishes a [public
advisory](https://github.com/greenplum-db/gpdb/security/advisories?state=published)
to the Greenplum Database community via GitHub. In most cases, additional
communication via Slack, Twitter, mailing lists, blog and other channels will
assist in educating Greenplum Database users and rolling out the patched
release to affected users.

The Greenplum project team will also publish any mitigating steps users can
take until the fix can be applied to their Greenplum Database instances.

## Mailing lists

* Use [email protected] to report security concerns to the Greenplum
Database project team, who uses the list to privately discuss security issues
and fixes prior to disclosure.

## Confidentiality, integrity and availability

We consider vulnerabilities leading to the compromise of data confidentiality
or integrity, or to elevation of privilege, to be our highest priority concerns.
Availability, in particular in areas relating to DoS and resource exhaustion,
is also a serious security concern. The Greenplum Database project team takes
all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities
seriously and will investigate them in an urgent and expeditious manner.

Note that we do not currently consider the default settings for Greenplum
Database to be secure-by-default. It is necessary for operators to explicitly
configure settings, role-based access control, and other resource-related
features in Greenplum Database to provide a hardened Greenplum Database
environment. We will not act on any security disclosure that relates to a lack
of safe defaults. Over time, we will work towards an improved secure-by-default
configuration, taking into account backwards compatibility.
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
2.0.1
2.1.0
23 changes: 12 additions & 11 deletions concourse/README.md
@@ -11,12 +11,12 @@

### PR Pipeline

https://extensions.ci.gpdb.pivotal.io/teams/main/pipelines/pr.diskquota
https://dev2.ci.gpdb.pivotal.io/teams/gp-extensions/pipelines/pr.diskquota

### Main Branch Pipeline

The development happens on the `gpdb` branch. The merge pipeline for the `gpdb` branch is
https://extensions.ci.gpdb.pivotal.io/teams/main/pipelines/merge.diskquota:gpdb
https://dev2.ci.gpdb.pivotal.io/teams/gp-extensions/pipelines/merge.diskquota:gpdb


# Fly a pipeline
@@ -25,24 +25,25 @@ https://extensions.ci.gpdb.pivotal.io/teams/main/pipelines/merge.diskquota:gpdb

- Install [ytt](https://carvel.dev/ytt/). It's written in Go, so just download the executable for your platform from the [release page](https://github.com/vmware-tanzu/carvel-ytt/releases).
- Make sure the `fly` command is in `PATH`, or export its location via the `FLY` env variable.
- Clone the `gp-continuous-integration` repo to `$HOME/workspace` or set its parent directory to `WORKSPACE` env.
- Login with the `fly` command. Assume we are using `extension` as the target name.
- Login with the `fly` command. Assume we are using `dev2` as the target name.

```
# -n gp-extensions is to set the concourse team
fly -t dev2 login -c https://dev2.ci.gpdb.pivotal.io -n gp-extensions
```

```
fly -t extension login -c https://extensions.ci.gpdb.pivotal.io
```
- `cd` to the `concourse` directory.

## Fly the PR pipeline

```
./fly.sh -t extension -c pr
./fly.sh -t dev2 -c pr
```

## Fly the merge pipeline

```
./fly.sh -t extension -c merge
./fly.sh -t dev2 -c merge
```

## Fly the release pipeline
@@ -67,7 +68,7 @@ To fly a release pipeline from a specific branch:
## Fly the dev pipeline

```
./fly.sh -t extension -c dev -p <your_postfix> -b <your_branch>
./fly.sh -t dev2 -c dev -p <your_postfix> -b <your_branch>
```

## Webhook
@@ -84,6 +85,6 @@ curl --data-raw "foo" <hook_url>

## PR pipeline is not triggered.

The PR pipeline relies on the webhook to detect the new PR. However, due to the limitation of the webhook implementation of concourse, we rely on the push hook for this. This means that if the PR is from a forked repo, the PR pipeline won't be triggered immediately. To manually trigger the pipeline, go to https://extensions.ci.gpdb.pivotal.io/teams/main/pipelines/pr.diskquota/resources/diskquota_pr and click the ⟳ button there.
The PR pipeline relies on the webhook to detect the new PR. However, due to the limitation of the webhook implementation of concourse, we rely on the push hook for this. This means that if the PR is from a forked repo, the PR pipeline won't be triggered immediately. To manually trigger the pipeline, go to https://dev2.ci.gpdb.pivotal.io/teams/gp-extensions/pipelines/pr.diskquota/resources/diskquota_pr and click the ⟳ button there.

TIP: Just don't fork; name your branch `<your_id>/<branch_name>` and push it here to create a PR.
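
For context on the webhook mentioned above, a Concourse resource accepts the
check endpoint's token via a `webhook_token` field. The sketch below is only
illustrative and is not taken from this repository's pipeline definitions: the
resource type, names, and `((vars))` are assumptions.

```yaml
# Hypothetical resource definition; names and ((vars)) are placeholders.
resources:
  - name: diskquota_pr
    type: pull-request              # assumed community PR resource type
    webhook_token: ((diskquota-webhook-token))
    source:
      repository: greenplum-db/diskquota
      access_token: ((github-access-token))
```

With a token configured like this, the `check/webhook?webhook_token=<hook_token>`
URL printed by `fly.sh` (see the fly.sh changes below) is the one to register on
GitHub.
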
25 changes: 24 additions & 1 deletion concourse/fly.sh
@@ -6,6 +6,7 @@ fly=${FLY:-"fly"}
echo "'fly' command: ${fly}"
echo ""
proj_name="diskquota"
concourse_team="main"

usage() {
if [ -n "$1" ]; then
@@ -19,6 +20,26 @@ usage() {
exit 1
}

# Hacky way to find out which concourse team is being used.
# The team name is needed to generate the webhook URL.
detect_concourse_team() {
local target="$1"
local fly_rc_file="$HOME/.flyrc"
local found_target=false
while read -r line;
do
line="$(echo -e "${line}" | tr -d '[:space:]')"
if [ ${found_target} != true ] && [ "${line}" = "${target}:" ]; then
found_target=true
fi
if [ ${found_target} = true ] && [[ "${line}" == team:* ]]; then
concourse_team=$(echo "${line}" | cut --delimiter=":" --fields=2)
echo "Use concourse target: ${target}, team: ${concourse_team}"
return
fi
done < "${fly_rc_file}"
}

# Parse command line options
while getopts ":c:t:p:b:T" o; do
case "${o}" in
@@ -52,6 +73,8 @@ if [ -z "${target}" ] || [ -z "${pipeline_config}" ]; then
usage ""
fi

detect_concourse_team "${target}"

pipeline_type=""
# Decide ytt options to generate pipeline
case ${pipeline_config} in
@@ -139,6 +162,6 @@ concourse_url=$(fly targets | awk "{if (\$1 == \"${target}\") {print \$2}}")
echo ""
echo "================================================================================"
echo "Remeber to set the the webhook URL on GitHub:"
echo "${concourse_url}/api/v1/teams/main/pipelines/${pipeline_name}/resources/${hook_res}/check/webhook?webhook_token=<hook_token>"
echo "${concourse_url}/api/v1/teams/${concourse_team}/pipelines/${pipeline_name}/resources/${hook_res}/check/webhook?webhook_token=<hook_token>"
echo "You may need to change the base URL if a differnt concourse server is used."
echo "================================================================================"
4 changes: 2 additions & 2 deletions concourse/pipeline/dev.yml
@@ -1,5 +1,5 @@
#@ load("job_def.lib.yml",
#@ "entrance_check_job",
#@ "entrance_job",
#@ "build_test_job",
#@ "centos6_gpdb6_conf",
#@ "centos7_gpdb6_conf",
@@ -24,7 +24,7 @@ jobs:
#@ "res_map": res_map,
#@ "trigger": trigger,
#@ }
- #@ entrance_check_job(param)
- #@ entrance_job(param)
#@ for conf in confs:
#@ param = {
#@ "res_map": res_map,
26 changes: 0 additions & 26 deletions concourse/pipeline/job_def.lib.yml
@@ -74,32 +74,6 @@ plan:
#@ end
#@ end

#! Like the entrance_job, with more static checks.
#@ def entrance_check_job(param):
#@ add_res_by_name(param["res_map"], "clang-format-image")
#@ trigger = param["trigger"]
name: entrance
on_failure: #@ trigger["on_failure"]
on_error: #@ trigger["on_error"]
plan:
#@ for to_get in trigger["to_get"]:
- trigger: #@ trigger["auto_trigger"]
_: #@ template.replace(to_get)
#@ end
#@ for to_put in trigger["to_put_pre"]:
- #@ to_put
#@ end
- get: clang-format-image
- task: check_clang_format
image: clang-format-image
config:
inputs:
- name: diskquota_src
platform: linux
run:
path: diskquota_src/concourse/scripts/check-clang-format.sh
#@ end

#@ def exit_job(param):
#@ trigger = param["trigger"]
#@ confs = param["confs"]
4 changes: 2 additions & 2 deletions concourse/pipeline/pr.yml
@@ -1,5 +1,5 @@
#@ load("job_def.lib.yml",
#@ "entrance_check_job",
#@ "entrance_job",
#@ "exit_pr_job",
#@ "build_test_job",
#@ "centos6_gpdb6_conf",
@@ -27,7 +27,7 @@ jobs:
#@ "res_map": res_map,
#@ "trigger": trigger,
#@ }
- #@ entrance_check_job(param)
- #@ entrance_job(param)
#@ for conf in confs:
#@ param = {
#@ "res_map": res_map,