
Decimal conversion inconsistency #7661

Open
comphead opened this issue Sep 26, 2023 · 3 comments
Labels
bug Something isn't working

Comments

@comphead
Contributor

comphead commented Sep 26, 2023

Describe the bug

A decimal cast query returns a result that is inconsistent with PostgreSQL and Spark.

To Reproduce

❯ select cast(1.1 as decimal(2, 2)) + 1;
+-------------------------+
| Float64(1.1) + Int64(1) |
+-------------------------+
| 2.10                    |
+-------------------------+
1 row in set. Query took 0.002 seconds.

The same query returns NULL in Spark (in non-ANSI mode) and raises a numeric field overflow error in PostgreSQL.

Expected behavior

The result should be consistent with PostgreSQL and Spark.

Additional context

No response

@comphead comphead added the bug Something isn't working label Sep 26, 2023
@comphead
Contributor Author

cc @viirya

@viirya
Member

viirya commented Sep 27, 2023

This is because the cast kernel upstream doesn't check for precision overflow, although it does check for casting overflow. I've submitted a change upstream for this: apache/arrow-rs#4866
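
For illustration, here is a minimal standalone sketch of the kind of precision check that was missing (hypothetical Rust, not the actual arrow-rs code; the real change is in apache/arrow-rs#4866):

// Hypothetical check: a Decimal128 of precision p can only hold
// unscaled values whose magnitude is at most 10^p - 1.
fn fits_precision(unscaled: i128, precision: u8) -> bool {
    let max = 10i128.pow(precision as u32) - 1;
    unscaled.abs() <= max
}

fn main() {
    // 1.1 at scale 2 is stored as the unscaled integer 110, but
    // precision 2 only allows magnitudes up to 99, so the cast must fail.
    assert!(!fits_precision(110, 2));
    assert!(fits_precision(99, 2));
}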

@eejbyfeldt
Contributor

On main, this now produces:

> select cast(1.1 as decimal(2, 2)) + 1;
Arrow error: Invalid argument error: 110 is too large to store in a Decimal128 of precision 2. Max is 9

@comphead should we close this issue?
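
For reference, both divergent behaviors can be sketched directly against the arrow-rs cast kernel. This is a sketch assuming a release that includes apache/arrow-rs#4866; the exact fields of CastOptions vary across arrow versions:

use arrow::array::{Array, Float64Array};
use arrow::compute::{cast_with_options, CastOptions};
use arrow::datatypes::DataType;

fn main() {
    let input = Float64Array::from(vec![1.1]);

    // safe = false: precision overflow surfaces as an error, matching
    // DataFusion on main and PostgreSQL's numeric field overflow.
    let strict = CastOptions { safe: false, ..Default::default() };
    assert!(cast_with_options(&input, &DataType::Decimal128(2, 2), &strict).is_err());

    // safe = true: the overflowing value is expected to become NULL instead,
    // matching Spark in non-ANSI mode.
    let lenient = CastOptions { safe: true, ..Default::default() };
    let out = cast_with_options(&input, &DataType::Decimal128(2, 2), &lenient).unwrap();
    assert!(out.is_null(0));
}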
