
[Bug] max_statement_mem not enforced #711

Open
2 tasks done
antoniopetrole opened this issue Nov 17, 2024 · 2 comments
Assignees
Labels
type: Bug Something isn't working

Comments

Member

antoniopetrole commented Nov 17, 2024

Cloudberry Database version

I've tested this in 1.5.4, 1.6.0, and the latest main branch all with the same results

What happened

Currently it seems that Cloudberry doesn't actually enforce the max_statement_mem GUC. This appears to be due to a missing return statement in the part of the code that checks whether statement_mem > max_statement_mem.

What you think should happen instead

Clearly max_statement_mem should prevent users from setting their own local statement_mem higher than it. This can create all kinds of issues, since users can overallocate memory and operate outside the bounds of their workload management.

How to reproduce

You can run this on any fresh install (or anywhere max_statement_mem is <= 2000MB):

CREATE USER testuser;
SET ROLE testuser;
SHOW statement_mem;
SHOW max_statement_mem;
SET statement_mem = '5000MB';
SHOW statement_mem;
EXPLAIN ANALYZE SELECT * FROM gp_segment_configuration;

The output for me is as follows on any of the versions mentioned above:


gpadmin=# CREATE USER testuser;
NOTICE:  resource queue required -- using default resource queue "pg_default"
CREATE ROLE

SET ROLE testuser;
SET

SHOW statement_mem;
 statement_mem
---------------
 125MB
(1 row)

SHOW max_statement_mem;
 max_statement_mem
-------------------
 2500MB
(1 row)

SET statement_mem = '5000MB';
SET

SHOW statement_mem;
 statement_mem
---------------
 5000MB
(1 row)

EXPLAIN ANALYZE SELECT * FROM gp_segment_configuration;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Seq Scan on gp_segment_configuration  (cost=0.00..1.01 rows=1 width=112) (actual time=0.015..0.017 rows=10 loops=1)
 Planning Time: 13.007 ms
   (slice0)    Executor memory: 109K bytes.
 **Memory used:  5120000kB**
 Optimizer: Postgres query optimizer
 Execution Time: 0.231 ms
(6 rows)

Operating System

Rocky Linux 9 (this should be an OS-agnostic bug)

Anything else

I have the change staged and tested locally (it's just a single "return false;" statement). I can easily write a regression test for this using the .sql and expected-output formats I see in the test directory; just point me in the right direction for where to put the test, as I don't see an obvious place for it.

Also, shoutout to Louis Mugnano for helping me track this down.

Are you willing to submit PR?

  • Yes, I am willing to submit a PR!

@antoniopetrole antoniopetrole added the type: Bug Something isn't working label Nov 17, 2024
Hey, @antoniopetrole welcome!🎊 Thanks for taking the time to point this out.🙌

@roseduan
Contributor

Thanks for your feedback.

I also couldn't find a proper place for a regression test.

Maybe adding a `return false;` statement is OK.
