
Use Spark noop write to speed up testing rather than sum aggregation #3

RobinL opened this issue Dec 5, 2024 · 1 comment
RobinL commented Dec 5, 2024

[image attachment]

RobinL commented Dec 5, 2024

You can do this with:



def execute_comparison(db_api, sql):
    # Force full execution of the query without materialising the results.
    if db_api.sql_dialect.sql_dialect_str == "duckdb":
        # DuckDB: stream the query output straight to the null device.
        con = db_api._con
        sql = f"COPY ({sql}) TO '/dev/null';"
        con.execute(sql)
    else:
        # Spark: the built-in "noop" format executes the full plan
        # but discards the output rows.
        spark = db_api.spark
        df = spark.sql(sql)
        df.write.format("noop").mode("overwrite").save()

but I've tried it, and in practice it doesn't make a difference to the results.
