[Versioning]: Explore Kedro + Iceberg for versioning #4241
Comments
This is slightly unscientific, but I trust the vibes in the industry enough to say Iceberg will clearly be the winner in the long term. Plus, people are saying things like this: In my opinion, this is a situation where we should really go all in on the technology rather than be super agnostic / one-size-fits-all. I'd love a future for Kedro where, without much configuration, persisted data defaults to this model.
@datajoely I actually took a stab at this a while ago. My experience is that Delta has more mature support than Iceberg at the moment in the Python ecosystem; for example, the integration of Ibis with Iceberg is suboptimal. From there, I think Delta is going to have better performance with anything database-related; AFAIK, Iceberg always loads things into memory first. One thing to note is that this "versioning" is not as effective as we would want. For example, an incremental change of adding one row results in a complete rewrite in the current Kedro dataset with Delta as well. For high-level versioning, it works very well with dataframe/table formats. The main challenge I see is how to unify the "versioning" in Kedro: Kedro uses a customisable timestamp, while Delta uses an incremental version number (0, 1, xxx) or a timestamp. Iceberg probably uses something similar, but I haven't checked.
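For context, here is a minimal sketch of Delta's incremental version numbers through the Python binding (the `deltalake` package; the path below is a placeholder):

```python
# Minimal sketch of Delta's version-number model via the `deltalake`
# (delta-rs) Python binding; the path below is a placeholder.
import pandas as pd
from deltalake import DeltaTable, write_deltalake

write_deltalake("tmp/events", pd.DataFrame({"id": [1]}))                 # version 0
write_deltalake("tmp/events", pd.DataFrame({"id": [2]}), mode="append")  # version 1

dt = DeltaTable("tmp/events", version=0)  # time travel by version number
print(dt.to_pandas())                     # sees only the first row
```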
Delta is 100% more mature, but Iceberg is the horse to back. This is the thread I was trying to find earlier: https://x.com/sean_lynch/status/1845500735842390276 I also don't think we should be wedded to that timestamp decision. It was made a long time ago and also carries a non-trivial risk of collision. If we were doing it again, we'd be better off using a ULID...
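For reference, a tiny sketch of the ULID idea (assuming the `python-ulid` package; this is not anything Kedro ships today):

```python
# Sketch of ULID-based version ids (python-ulid package; illustrative only).
from ulid import ULID

version = str(ULID())  # e.g. "01JC5W4N4V7L1K1V9ZSTQW3XEM"
# ULIDs sort lexicographically by creation time (48-bit timestamp prefix)
# and carry 80 random bits, so two runs started in the same millisecond
# don't collide the way a pure timestamp version can.
```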
^ To be more specific, I was referring mainly to the Python bindings, i.e. PyIceberg and rust-delta (Python). Iceberg itself is fairly mature, especially with the catalog etc., but the Python binding seems to be lagging behind a little bit.
Any chance I can take this ticket, or we can work together on it? I explored this a little bit a while ago, and it would be a great opportunity to continue.
I agree with @datajoely that Iceberg is the horse to back, at least from an API perspective. PyIceberg is maturing (it has moved significantly in the past couple of years). Realistically, I don't think Kedro should dictate whether you use Iceberg or Delta (or Hudi); that is a user choice, just like whether to use Spark or Polars. This is where unified APIs will ideally make implementation easier.
So I'm actually being bullish and saying we should pick one of these when it comes to our idea of versioned data. We simply don't have the capacity to integrate properly everywhere.
Super cool application of these concepts
I'm with @deepyaman on this one. There should be a layer in Kedro that is format-agnostic. We can be more opinionated in a higher layer.
What's clear, though, is that Apache Iceberg's REST catalog API has won for sure: kedro-org/kedro-devrel#141 (comment)
I just want to warn against the noble pursuit of generalisation when there are times to pick a winner; I'd much rather pick a horse and do it well.
@ElenaKhaustova I have left some questions at the end since it's not a PR yet. https://noklam.github.io/blog/posts/pyiceberg/2024-11-18-PyIcebergDataset.html

# Questions
- What does it mean when we say "if we can use Iceberg to map a single version number to code, parameters, and I/O data within Kedro and how it aligns with Kedro's workflow"? Versioning code & parameters sounds more like versioning artifacts.
- How to version data efficiently? `overwrite` is a complete rewrite. For a SQL engine, this is implemented by the engine using APIs like `append` and `replace`. With pandas/polars it is unclear whether this is possible (it may be, using something like `ibis`). See the sketch after this list.
- Incremental pipeline (and incremental data)
- How to version non-table types, i.e. parameters and code(?). Iceberg supports only three formats out of the box: Apache Parquet, Apache ORC, and Apache Avro. Parquet is the first-class citizen and the only format people use in practice.
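On the `overwrite` question above, a rough sketch of the write paths PyIceberg offers today (the table name is hypothetical, and a catalog is assumed to be configured):

```python
# Rough sketch: PyIceberg write paths (hypothetical table; catalog assumed
# configured, e.g. in ~/.pyiceberg.yaml).
import pyarrow as pa
from pyiceberg.catalog import load_catalog

table = load_catalog("default").load_table("demo.events")
batch = pa.table({"id": [1], "value": ["a"]})

table.append(batch)     # adds data files; earlier snapshots stay readable
table.overwrite(batch)  # logical full rewrite, recorded as a new snapshot
```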
From the pain points and summary of the versioning research (https://miro.com/app/board/uXjVK9U8mVo=/?share_link_id=24726044039), we concluded that users mention two major problems: versioning and experiment tracking. At first, we decided to focus on versioning. There, the main user pain point was not versioning a specific artifact (current Kedro versioning already allows that, though not in an optimal way) but being able to retrieve a whole experiment/run: travelling back in time with your code and data and checking out a specific version of the whole Kedro project, not just an individual artifact. Please see the Kedro + DVC example for a better understanding: #4239 (comment). It's clear we can easily version artifacts (tabular data), but what about versioning catalogs/projects (more high-level entities) and non-tabular data?
My view:
I'm willing to bet >95% of use cases fall into this.
Now that I've seen how elegant the DVC integration can be, maybe that's the right paradigm?
My short answer is no.
You can create branches and tags with an Iceberg table (see the sketch below), but again, that's for tabular data only.

Other comments

Many experiment tracking tools start with metric tracking + a git hash, then slowly add data versioning & lineage as well. In general, a full reproducible experiment (whether or not this is important is a different story) requires versioning code, parameters, and data together.
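A rough sketch of what that looks like with PyIceberg's snapshot-management API (0.7+; the table and tag names are made up):

```python
# Rough sketch: tagging the current snapshot with PyIceberg's
# manage_snapshots() API (0.7+; table and tag names are made up).
from pyiceberg.catalog import load_catalog

table = load_catalog("default").load_table("demo.events")
snapshot_id = table.current_snapshot().snapshot_id

table.manage_snapshots().create_tag(
    snapshot_id=snapshot_id, tag_name="run-2024-11-18"
).commit()
# This versions the *table* only; code and parameters live elsewhere.
```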
With Iceberg, that metadata needs to be handled externally, e.g. a SQLite db that keeps track of each run's git hash plus the load_version of all the data (or their snapshot_id from the Iceberg table). So when a user needs to time travel, they have to specify the load version.
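Roughly along these lines (a hedged sketch: the SQLite schema, table name, and run id are all hypothetical):

```python
# Hedged sketch: external run metadata in SQLite, mapping a run id to a
# git hash and per-dataset Iceberg snapshot ids (schema is hypothetical).
import sqlite3
from pyiceberg.catalog import load_catalog

conn = sqlite3.connect("runs.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS runs "
    "(run_id TEXT, git_sha TEXT, dataset TEXT, snapshot_id INTEGER)"
)

# Time travel: look up the snapshot a past run produced, then scan it.
row = conn.execute(
    "SELECT snapshot_id FROM runs WHERE run_id = ? AND dataset = ?",
    ("2024-11-18T10.00.00.000Z", "demo.events"),
).fetchone()

table = load_catalog("default").load_table("demo.events")
df = table.scan(snapshot_id=row[0]).to_arrow()
```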
I put together an example here: https://github.com/noklam/kedro-example/tree/master/iceberg-incremental

TL;DR: In my mind, it wouldn't require changes in Kedro itself, but more likely at the dataset level.
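For illustration, here is a minimal sketch of what a dataset-level integration could look like: a hypothetical `PyIcebergDataset`, not an existing `kedro-datasets` class (Nok's blog post linked above explores a fuller version):

```python
# Hypothetical PyIcebergDataset: a sketch of a dataset-level integration,
# not an existing kedro-datasets class.
import pyarrow as pa
from kedro.io import AbstractDataset
from pyiceberg.catalog import load_catalog


class PyIcebergDataset(AbstractDataset[pa.Table, pa.Table]):
    def __init__(self, table_name: str, catalog: str = "default"):
        self._table_name = table_name
        self._catalog = load_catalog(catalog)

    def _load(self) -> pa.Table:
        return self._catalog.load_table(self._table_name).scan().to_arrow()

    def _save(self, data: pa.Table) -> None:
        # Appending keeps snapshot history; Iceberg records every commit.
        self._catalog.load_table(self._table_name).append(data)

    def _describe(self) -> dict:
        return {"table_name": self._table_name}
```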
Indeed. From https://py.iceberg.apache.org/configuration/#catalogs, "PyIceberg currently has native catalog type support for REST, SQL, Hive, Glue and DynamoDB"
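For example, pointing PyIceberg at a REST catalog takes only a few properties (the URIs below are placeholders):

```python
# Sketch: connecting PyIceberg to a REST catalog (the URIs are placeholders).
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "default",
    **{
        "type": "rest",
        "uri": "https://iceberg-catalog.example.com",
        "warehouse": "s3://my-bucket/warehouse",
    },
)
```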
Yes, but many users (including ourselves! e.g. @DimedS's ETL for PyPI data) are using Kedro for data pipelines that require UPSERT/merge operations; see the long discussion and pointers to user evidence in #3578
Is that unique to Iceberg?
Thanks Nok, this is super helpful in general.
Yes, Iceberg, Delta, etc. make sense in a lake context, particularly with upsert operations. I don't think there's anything significant blocking Iceberg support in Ibis; PyIceberg may not support the most efficient implementation (I haven't checked lately), but it should still be possible. Spark with Iceberg is also very possible.
💯 This is what I've tried to say from the beginning; it's about whether the engine supports working with Delta, Iceberg, etc., and those will also interface directly with the relevant Delta/Iceberg catalog.
Again, for a subset of datasets, this should make sense. kedro-org/kedro-plugins#835 is technically feasible, and upserts are also possible for Spark. I think this is necessary to be a serious player in DE tooling; nobody is going to reprocess all their data every run in production.
Yes. Upsert/Append/Merge is possible with Kedro, but it wasn't the main use case. It could be a separate argument that we want to bet more on this. I haven't implemented it fully, but my demo is based on that incremental pipeline, so we can discuss the details later.
Not really; that would be the same for Iceberg/Delta.
In fact, I think Spark is the most common path through which Kedro users will interact with Delta/Iceberg; in theory, you can write to a Delta Lake/Iceberg table through Spark. Upsert is slightly out of scope for this issue, but there are things we can do to add value; see the sketch below.
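For reference, the Spark upsert path looks roughly like this (it assumes a Spark session already configured with an Iceberg catalog; the table and column names are made up):

```python
# Sketch of an upsert through Spark SQL on an Iceberg table; the Spark
# session, catalog configuration, and all names here are assumed.
updates_df.createOrReplaceTempView("updates")

spark.sql("""
    MERGE INTO demo.events AS t
    USING updates AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```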
Description
At the current stage, by versioning we mean mapping a single version number to the corresponding versions of parameters, I/O data, and code, so that one is able to retrieve the full project state, including data, at any point in time.
The goal is to check whether we can use Iceberg to map a single version number to code, parameters, and I/O data within Kedro, and how this aligns with Kedro's workflow.
As a result, we expect a working example of a Kedro project used with Iceberg for versioning, together with some assumptions drawn from it.
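Purely as an illustration of that mapping, a run "manifest" could look like this (every name and value below is made up):

```python
# Illustrative only: one version number tying code, parameters, and data
# together. Every name and value below is made up.
run_manifest = {
    "run_id": "01JC5W4N4V7L1K1V9ZSTQW3XEM",  # the single version number (e.g. a ULID)
    "git_sha": "a1b2c3d",                     # code version
    "params_hash": "d41d8cd9",                # parameters version
    "datasets": {"demo.events": 523496847},   # Iceberg snapshot id per dataset
}
```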
Context
#4199
Market research