GH-32723: [C++][Parquet] Add option to use LARGE* variants of binary types #35825
Conversation
The skeleton is ok here, but I think too much duplicated code is introduced. Would you mind using templates to simplify the code?
Sure, I'll first add some tests and then look into this. If you have any suggestions on what code you would like to be templated, please let me know |
Just added a test that depends on a Parquet test file; PR for the file: apache/parquet-testing#38 |
@arthurpassos What kind of API are you using? Arrow has the Dataset API, but the file can also be read using the raw API |
I am one of the contributors to ClickHouse, a column-oriented database. We rely on Arrow to read Parquet files. The code has changed since I last worked on it, but I can see a combination of the following classes/methods. You can find the full code here: https://github.com/ClickHouse/ClickHouse/blob/master/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp#L31 |
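For reference, a minimal sketch of this kind of Arrow-based read path, assuming the usual parquet::arrow::FileReader entry points (the file path, function name, and error handling are placeholders, not code from ClickHouse or from this PR):

#include <arrow/api.h>
#include <arrow/io/file.h>
#include <parquet/arrow/reader.h>

// Open a Parquet file and read it into an arrow::Table via the Arrow bridge.
arrow::Status ReadParquetAsArrow(const std::string& path) {
  ARROW_ASSIGN_OR_RAISE(auto input, arrow::io::ReadableFile::Open(path));

  std::unique_ptr<parquet::arrow::FileReader> reader;
  ARROW_RETURN_NOT_OK(
      parquet::arrow::OpenFile(input, arrow::default_memory_pool(), &reader));

  std::shared_ptr<arrow::Table> table;
  ARROW_RETURN_NOT_OK(reader->ReadTable(&table));
  // The consumer (e.g. ClickHouse) would then convert the Arrow table into
  // its own in-memory format.
  return arrow::Status::OK();
}

It is the Arrow materialization step in this path that hits the 2GB limit discussed in this PR, once a single binary column chunk exceeds what an int32-offset BinaryArray can hold.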
@mapleFU I see you reacted with a thumbs up. I assume you mean that's the correct API? Unfortunately, this runs into the issue this PR tries to address. Could you guide me on how to use the lower-level API to work around this problem?
I think your change is reasonable, but I guess it would take some time to merge it. So instead I guess:
@pitrou @wgtmac @arthurpassos What do you think of this? |
I certainly think that if you hit a 2GB column chunk limit when reading Parquet data, you're probably using a way too large batch size. Is there a use case where it makes sense? |
I don't think we have any low-level API other than |
The way it works now is: parquet format -> arrow format -> clickhouse format. I was thinking we could remove the arrow part by converting directly from parquet format to clickhouse format with the lower-level APIs; I'm just not sure it's possible. Hence the question |
If you're not using the Arrow format internally, then sure, it's possible. You can take a look at the |
We are using it, but I am exploring other options |
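For context, a rough sketch of what reading binary data through the lower-level Parquet column reader (bypassing Arrow arrays, and therefore the 2GB BinaryArray limit) could look like; the file path, batch size, and the assumption of a flat ByteArray column are illustrative only:

#include <parquet/column_reader.h>
#include <parquet/file_reader.h>

#include <memory>
#include <string>
#include <vector>

// Read a ByteArray column directly, without materializing Arrow arrays.
void ReadByteArrayColumn(const std::string& path) {
  std::unique_ptr<parquet::ParquetFileReader> file_reader =
      parquet::ParquetFileReader::OpenFile(path);
  std::shared_ptr<parquet::RowGroupReader> row_group = file_reader->RowGroup(0);
  std::shared_ptr<parquet::ColumnReader> column = row_group->Column(0);
  auto* reader = static_cast<parquet::ByteArrayReader*>(column.get());

  constexpr int64_t kBatch = 1024;
  std::vector<int16_t> def_levels(kBatch);
  std::vector<int16_t> rep_levels(kBatch);
  std::vector<parquet::ByteArray> values(kBatch);
  while (reader->HasNext()) {
    int64_t values_read = 0;
    reader->ReadBatch(kBatch, def_levels.data(), rep_levels.data(),
                      values.data(), &values_read);
    // Convert values[0..values_read) straight into the consumer's own format.
  }
}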
Regardless of the use case, shouldn't arrow support it simply because parquet supports it? |
I'm not sure arguing about ideals is useful :-) While adding the feature would definitely be reasonable, it's also not important enough that we should accept warts in the API in its name, IMHO.
Out of curiosity, would you care to explain why? The main impediment to |
I meant that |
@pitrou What do you think would be the optimal solution for this issue? |
auto estimatedRowSize = dataSource_->estimatedRowSize();
readBatchSize_ =
    estimatedRowSize == connector::DataSource::kUnknownRowSize
        ? outputBatchRows()
        : outputBatchRows(estimatedRowSize);

A config to adjust the batch size might help |
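For comparison, the Arrow Parquet reader already exposes a per-reader batch size. A minimal sketch of wiring it up through parquet::arrow::FileReaderBuilder, with the file path and batch size value chosen arbitrarily for illustration:

#include <arrow/api.h>
#include <parquet/arrow/reader.h>
#include <parquet/properties.h>

// Build a FileReader whose record batches are small enough that a single
// binary column stays well below the 2GB per-array limit.
arrow::Status OpenWithSmallBatches(const std::string& path) {
  parquet::ArrowReaderProperties arrow_props;
  arrow_props.set_batch_size(1024);  // the default is 64 * 1024 rows

  parquet::arrow::FileReaderBuilder builder;
  ARROW_RETURN_NOT_OK(builder.OpenFile(path));
  builder.properties(arrow_props);

  std::unique_ptr<parquet::arrow::FileReader> reader;
  ARROW_RETURN_NOT_OK(builder.Build(&reader));

  std::shared_ptr<arrow::RecordBatchReader> batch_reader;
  ARROW_RETURN_NOT_OK(reader->GetRecordBatchReader(&batch_reader));
  // Iterate batch_reader; each batch covers at most 1024 rows.
  return arrow::Status::OK();
}

Note that a smaller batch size only avoids the problem if no single batch of a binary column exceeds 2GB; it does not remove the underlying limit.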
I think batch_size is being set here: https://github.com/ClickHouse/ClickHouse/blob/master/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp#L101. It defaults to 8192. |
Yes, ClickHouse uses the default size of 8192, but personally I think it's better to adjust the batch size based on the storage schema when reading from a file.
@arthurpassos Hi Arthur. Is there a roadmap for merging this PR? I think it might be connected to an issue we're experiencing in our pipeline, which I've documented in #38513. I've created a separate issue because the conditions are a little bit different than in the one that was automatically assigned to this PR (#32723). |
To be honest, there is no roadmap. If I understand correctly, this patch is unwanted by the core maintainers as is, and the alternative approach is not even guaranteed to work.
This, if completed, would fix #39682 |
Recently I've revisited this part of the code. Maybe we can give this some thought: since a single string wouldn't be greater than 2GB, the accumulator could still be a StringBuilder/BinaryBuilder, limited to 2GB. And if the user uses LargeBinary,
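A rough illustration of that idea (this is not code from the PR, and the function name is made up): keep accumulating into an int32-offset BinaryBuilder and cut a new chunk whenever appending would push the value buffer past the 2GB offset limit, so the result comes back as a ChunkedArray rather than one oversized array.

#include <arrow/api.h>

#include <limits>
#include <string>
#include <vector>

// Accumulate binary values into 2GB-bounded chunks instead of a single array.
arrow::Result<std::shared_ptr<arrow::ChunkedArray>> AccumulateBinary(
    const std::vector<std::string>& values) {
  arrow::BinaryBuilder builder;
  arrow::ArrayVector chunks;
  for (const auto& v : values) {
    // int32 offsets cap a single BinaryArray at ~2GB of value data,
    // so finish the current chunk before it would overflow.
    if (builder.value_data_length() + static_cast<int64_t>(v.size()) >
        std::numeric_limits<int32_t>::max()) {
      ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::Array> chunk, builder.Finish());
      chunks.push_back(chunk);
    }
    ARROW_RETURN_NOT_OK(builder.Append(v.data(), static_cast<int32_t>(v.size())));
  }
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::Array> last, builder.Finish());
  chunks.push_back(last);
  return std::make_shared<arrow::ChunkedArray>(chunks);
}

A reader following this scheme would hand the chunks back as a ChunkedArray (or cast them to LargeBinary on request) instead of failing once a column chunk no longer fits in a single array.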
@mapleFU Could you make a more precise proposal so that we can understand a bit better? What would the API be like, concretely? |
Will create a separate issue for that |
Created an issue for that: #41104
Rationale for this change
What changes are included in this PR?
Are these changes tested?
Are there any user-facing changes?