Performance improvement for transposing data #2
I think the trick will be to loop through Stata row-wise. At the moment I loop through Stata column-wise. It sounds like the performance loss in Arrow will be smaller than the loss in Stata. |
The Stata methods aren't thread-safe, right? So it has to be single-threaded? |
I think it can be multi-threaded, but I don't think it actually improved performance when I tested it (though I might not have done it right). I think what might benefit quite a bit from multi-threading is the read/write from/to Parquet files, not Stata memory. |
I wouldn't be surprised if that is multi-threaded by default |
At least in Python, it reads Parquet files multi-threaded by default. Sometime soon I'd like to try to go through your code more closely. |
I think this might be a faster way to structure it: read the Parquet file (or one row group at a time) into an Arrow table in memory first, then loop through that table row-wise to fill Stata.
Then the converse for write: fill an Arrow table from Stata row-wise, then write the whole table to the Parquet file at the end.
At the moment, this reads the parquet file in column order and saves to Stata on the fly in column order as well. For writing, it reads the data in memory into a parquet table but, again, it loops through Stata in column order. |
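A rough sketch of the proposed restructuring, using pyarrow only to illustrate the access pattern (the plugin itself is C++, and the file name here is made up): read one row group into memory as a columnar table, then traverse it row-wise when copying the values out.

```python
import pyarrow.parquet as pq

# Hypothetical file name, for illustration only.
pf = pq.ParquetFile("example.parquet")

row_offset = 0
for rg in range(pf.num_row_groups):
    # Read one row group into memory as an Arrow table (columnar).
    table = pf.read_row_group(rg)
    cols = [table.column(i) for i in range(table.num_columns)]
    # Then walk it row-wise when copying into the destination
    # (in the plugin this would be the store into Stata memory).
    for i in range(table.num_rows):
        row = [col[i].as_py() for col in cols]
        # ... store `row` at observation row_offset + i ...
    row_offset += table.num_rows
```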
Yes, I agree with all of that. |
I'm doing some benchmarking. Writing 10M rows and 3 variables once the data is in an Arrow table takes 0.5s. Looping over Stata as it is at the moment also takes 0.5s. Writing that to a .dta file takes 0.2s. |
Even without further speed improvements, this package would be extremely helpful for anybody who uses Stata and {Python,R,Spark} (though R support for Parquet is still kinda limited), because it would mean that Stata could read binary data exported from one of those platforms. |
I wonder if it's not multi-threaded. I would like to cut processing time in half, ideally. I think that's plausible, but I doubt it can ever be faster than reading/writing .dta files directly (otherwise, I mean, what's the point of |
You're comparing reading the entire .dta file into Stata with the entire .parquet file... That's not necessarily the right comparison. Reading the first column of the first row group in Parquet is extremely fast. Doing use col1 in 1/1000 using file.dta is sometimes extremely slow. I originally was frustrated because when you do use in 1 using file.dta it has to load the entire file just to read the first row of the data! So if there are huge (~500GB) files that can be split into say ~20 row groups, that's something that Parquet could excel at. |
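For comparison, here is roughly what that cheap Parquet read looks like with pyarrow (the file and column names are hypothetical): only the first row group and one column are touched, so the rest of the file is never read.

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("file.parquet")

# Reads only the first row group, and only one column of it; the rest of
# the file is never touched, unlike `use in 1 using file.dta`.
first = pf.read_row_group(0, columns=["col1"])
print(first.num_rows, first.column("col1")[0])
```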
Nicely enough, it takes literally a third of the time (one col vs 3) |
Haha yeah, that's been my experience as well. Generally it's linear in the number of columns you read. And since a ton of data analysis only cares about a few columns out of a dataset of 300, the columnar file type can really make a difference. |
It sounds like individual columns can be chunked. I think I can only implement the solution suggested in the Apache docs if the number of chunks and each chunk size is the same. I suppose most flat data would be like that, though. Need to check first and fall back to out of order if each column is not stored in the same way. |
I think that each column can only be chunked inside a row group. So if the first row group is 10,000 rows, then there won't be any chunks smaller than that for the first 10,000 rows. I'm not sure if that sentence makes sense |
I think a row group is only relevant when reading the file from disk, not when iterating over the table already in memory. |
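One way to check this with pyarrow (a sketch only; the file name is made up): after reading, each column is a ChunkedArray whose chunk boundaries typically line up with the row groups, and combine_chunks flattens the table so every column has a single chunk, at the cost of a copy.

```python
import pyarrow.parquet as pq

table = pq.read_table("example.parquet")

# Each column is a ChunkedArray; chunk boundaries typically line up with
# the row groups the file was written with.
for name, col in zip(table.column_names, table.columns):
    print(name, col.num_chunks, [len(c) for c in col.chunks])

# If the chunk layouts differ across columns, flattening the table gives
# every column a single chunk (one extra copy).
table = table.combine_chunks()
```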
I've been trying this out on the server on modestly large data that I've been using for a project (few GiB) and compression is amazing! Performance for traversing several variables in Stata in column order is pretty poor, though, especially if there are a ton of strings. I won't spend any time optimizing the row vs column-order thing until we figure out how the Java version fares, but it's pretty cool to see a fairly complicated 21GiB file down at 5GiB. |
Yes, the compression is amazing. Using Parquet files with something like Dask or Spark completely opens up doing computation on 20GB files on a laptop. |
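For example, a minimal Dask sketch of that kind of out-of-core workflow (the path and column names here are hypothetical): the dataset is read lazily, partition by partition, so the 20GB file never has to fit in RAM at once.

```python
import dask.dataframe as dd

# Only the requested columns are read, one partition at a time.
df = dd.read_parquet("data/claims/*.parquet", columns=["state", "payment"])
result = df.groupby("state")["payment"].mean().compute()
print(result)
```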
Just bumping this in case you had any great discovery in the last few months. Since you're still on my DUA, can you try this command:
That parquet directory is 2.6GB, but the command has been running for 11 minutes and hasn't finished... It would be really awesome if you had some way of making a progress bar. |
Yup. Had this in the back of my mind. Don't think it'd take too long. Format ideas? |
Yeah that seems great |
linesize is a problem ): |
why? |
When I try to print the timer, it often gets broken up by linesize, so it looks all wrong. I also can't get the formatting to work right. In my tests Stata prints at the end of the program, not as the program is executing. I suspect it's waiting for a new line... |
Weird. Btw did you try this? |
I'm just going to have it report every 30 seconds or something like that. |
Wow. So, basically, it's Stata that choked? Mmm... Is there a way to add 381 variables that doesn't take 23 minutes? I think it might be somewhat faster if I tell mata not to initialize the variables (or maybe mata performance deteriorates with this many variables? I tested it and it was faster than looping gen...) |
I don't know... This was basically the first real-world file I've tried to read into Stata with this |
Ah! I have it. Maybe it's faster to allocate 1 observation and then set obs to _N. |
Weird |
Great! With 20M obs, allocating 8 variables and then the observations takes 0.5s, vs 2.5s the way |
Great. Let me know when to test |
10 minutes for me. I wonder if the progress code has slowed down the plugin? Although I do have someone else using 8 cores on the server I'm on... While it does print mid-execution, it appears that Stata doesn't print it right away anyway ): Since it might be pointless, should I just take it out? |
How long is the delay until it's printed? |
It seems to print every 4 lines or something like that. |
I can push what I have for you to test if you want? |
Sure |
Up. |
When printing size on disk/size in memory, I'd recommend a human-readable number, instead of millions or billions of bytes |
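Something like this little helper would do (a sketch; the exact units and formatting are just a suggestion):

```python
def human_size(num_bytes: float) -> str:
    """Format a byte count as a human-readable string, e.g. '20.0 GiB'."""
    for unit in ["B", "KiB", "MiB", "GiB"]:
        if num_bytes < 1024:
            return f"{num_bytes:,.1f} {unit}"
        num_bytes /= 1024
    return f"{num_bytes:,.1f} TiB"

print(human_size(21_474_836_480))  # -> "20.0 GiB"
```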
How does this compare time wise to Stata reading a 20GB file? |
Stata's not that slow reading an entire dataset, afaik. Not sure how that compares to native Parquet performance, tho (16GiB).
. use /disk/aging/medicare/data/100pct/med/2014/med2014.dta, clear
r; t=23.95 15:18:28 |
So it's still 10 minutes for parquet vs 24 seconds for Stata format? |
Took 3 min in python
It took me 10 min, but it took you 4 min (and the rest was the inefficient Stata memory alloc). I don't think performance is that bad, all things considered, but it could def be better. Besides, the point is to benchmark reading only a subset of all columns, right? |
Pyarrow is also multithreaded I'm pretty sure. But yes, most of the time a user would only be loading a subset of columns. |
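For reference, a pyarrow read of a column subset looks like this (the path and column names are made up); use_threads defaults to True, so the decode is already parallelized.

```python
import pyarrow as pa
import pyarrow.parquet as pq

print("pyarrow thread pool size:", pa.cpu_count())

# Only the requested columns are read and decoded; use_threads=True is
# the default and parallelizes the work across columns.
table = pq.read_table(
    "/path/to/dataset.parquet",
    columns=["bene_id", "clm_pmt_amt"],
    use_threads=True,
)
print(table.num_rows, table.num_columns)
```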
There's a tutorial about this in the Arrow C++ documentation:
https://arrow.apache.org/docs/cpp/md_tutorials_row_wise_conversion.html
The Arrow-to-row-wise conversion is the second half of the document.
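The same idea expressed in pyarrow terms (a sketch only; the C++ tutorial linked above is the authoritative version): convert batch by batch and walk the rows inside each batch.

```python
import pyarrow as pa

# Toy columnar table standing in for data read from Parquet.
table = pa.table({"id": [1, 2, 3], "value": [10.0, 20.0, 30.0]})

rows = []
# to_batches() yields contiguous RecordBatches; iterating rows within a
# batch is the row-wise traversal the tutorial describes.
for batch in table.to_batches():
    columns = [batch.column(i) for i in range(batch.num_columns)]
    for i in range(batch.num_rows):
        rows.append(tuple(col[i].as_py() for col in columns))

print(rows[0])
```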