Add writing functionality for dataframes #45
base: develop
Conversation
rdata/conversion/_conversion.py (Outdated)

```diff
@@ -820,6 +872,9 @@ def _convert_next(  # noqa: C901, PLR0912, PLR0915
         value = None

+    elif obj.info.type == parser.RObjectType.ALTREP:
+        value = convert_altrep_to_range(obj)
```
I feel that I am missing something here. Altreps should not be present here when `expand_altrep` is `True` (and that is the default). Also, I am not sure that it is a good idea to convert to different Python objects depending on the internal way some R object is stored (AFAIK, altreps are supposed to be an implementation detail).
This relates to the support for writing dataframes with range indices like `pd.DataFrame(..., index=range(2, 5))`.

The reason that this change is in this PR is that it allows testing the R-to-Python-to-R roundtrip for such a dataframe (in R, `data.frame(..., row.names=2:4)`; this is `test_dataframe_range_rownames.rds`). Here the row names are a compact_intseq altrep, which would get converted to a regular array by default, and that would then fail the roundtrip. But, other than enabling that test, this change doesn't really belong in this PR, so it could be removed if you think that's better.

In general, I feel that it would be nicer if altreps were expanded in the conversion stage instead of the parsing stage. The reason is that then the RData object would be a one-to-one representation of the file contents, which is not the case for altreps at the moment (unless `expand_altrep=False`). I have interpreted the compact_intseq altrep as implementing the same idea as Python's range object, so it would be useful if users had an option to choose whether they want this altrep converted to a NumPy array or a range object, which seems not possible (or easy?) if altreps are already expanded during parsing. That would be a larger change though (beyond this PR). What do you think in general?
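For illustration, a minimal sketch of the roundtrip case being discussed, assuming the `write_rds`/`read_rds` entry points take a path followed by the object:

```python
import pandas as pd
import rdata

# In R this dataframe is data.frame(a=c(1, 2, 3), row.names=2:4); its row
# names are stored as a compact_intseq altrep (the analogue of range(2, 5)).
df = pd.DataFrame({"a": [1.0, 2.0, 3.0]}, index=range(2, 5))

rdata.write_rds("test_dataframe_range_rownames.rds", df)
roundtripped = rdata.read_rds("test_dataframe_range_rownames.rds")
```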
I think altreps are intended to be an internal representation detail. That is why I made the default to expand them in the parser: I assumed that, except for very specific situations, users would prefer to ignore their existence.

That said, IMHO the default conversion routines should be able to deal with altreps (in case `expand_altreps` is `False`). I still think they should convert to the same object as if they weren't altreps, for consistency.

For me, ranges are a very different beast than NumPy arrays, and returning such different kinds of objects for the "same" underlying R object would be confusing. Moreover, there is no equivalent to `range` for compact sequences of floats.

If we want to convert an array back to an altrep for space savings, we could probably try to detect if the array is a sequence (with `np.diff`, a subtraction, and `np.all` or something like that it would be easy, at least for integer dtypes). However, that would convert to an altrep even arrays that were not stored as altreps in the original file, for whatever reason. So, the choice would be between no altreps at all when writing, or all possible ones.
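As a concrete sketch of that detection (a hypothetical helper, not code from this PR):

```python
import numpy as np

def is_intseq(array: np.ndarray) -> bool:
    """Return True if a 1-D integer array is an arithmetic sequence with
    step 1, i.e. representable as a compact_intseq altrep."""
    if array.ndim != 1 or array.size < 2:
        return False
    if not np.issubdtype(array.dtype, np.integer):
        return False
    return bool(np.all(np.diff(array) == 1))

print(is_intseq(np.arange(2, 5)))       # True
print(is_intseq(np.array([1, 3, 4])))   # False
```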
For the above reasons I think it is preferable not to consider altreps in the roundtrip comparison (that is, to compare both files with `expand_altreps=True`). Any thoughts on this?
I agree with your points on range, and that it would be confusing to always convert the intseq altrep to a range object, as it doesn't behave like a NumPy array (in contrast to the behavior in R). So, I removed this altrep-to-range conversion from the R-to-Python conversion.

On the Python-to-R conversion side, I also removed the general conversion of range objects, but changed that logic to apply to `pd.RangeIndex` only; that is, `pd.RangeIndex` converts to an altrep (if possible, i.e. step=1) and `pd.Index` converts to an array. I added a few tests for this case as the roundtrip comparison can't reach it. Do you think this is ok?
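For instance (an illustrative sketch of the described behavior, not a test from the PR):

```python
import pandas as pd

# Index written as a compact_intseq altrep (step is 1):
df_altrep = pd.DataFrame({"a": [1, 2, 3]}, index=pd.RangeIndex(2, 5))

# Index written as a regular integer vector:
df_array = pd.DataFrame({"a": [1, 2, 3]}, index=pd.Index([2, 3, 4]))
```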
> If we want to convert an array back to an altrep for space savings we could probably try to detect if the array is a sequence
I agree that could be done at some point if needed.
> For the above reasons I think it is preferable not to consider altreps in the roundtrip comparison (that is, comparing both files with expand_altreps=True). Any thoughts on this?
I agree. The R-Python-R roundtrip tests do file checking only for `expand_altreps=False` (which skips all files with altreps, as altreps are then unhandled in the R-to-Python conversion).
> Do you think this is ok?
In principle it is ok to write altreps when possible by default. I am not sure if we should add an option for the `Converter` not to use altreps, for compatibility with tools that do not understand them.
```diff
@@ -430,7 +482,7 @@ def dataframe_constructor(
             and isinstance(row_names, np.ma.MaskedArray)
             and row_names.mask[0]
         )
-        else tuple(row_names)
+        else row_names
```
Why this change?
The reason was to keep a range object as a range object instead of expanding it to a tuple of values (this relates to the altrep comment).
I'd keep this change even though I removed the altrep-to-range conversion. The reason is that if a user creates a custom `altrep_constructor_dict` that maps compact_intseq to a range object, then that range object wouldn't be expanded here but would be passed to the dataframe as such. What do you think?
I would say that such a mapping is an error. However, I do not have anything against this particular change, unless the `tuple` call was necessary for some reason.
```python
    Returns:
        Numpy array.
    """
    if isinstance(pd_array, pd.arrays.StringArray):
```
So, is there some reason not to be able to use masked arrays whenever there are NA values, independently of the array type?
That could be done. I think it would clarify many things, but it would require changes on the parsing side too.

The main issue here could be string arrays, where NA values are `None` at the moment; should parsing of string arrays return a possibly masked array instead? Also in writing: if a user has `strings = ["hello", None]`, this can currently be converted to a valid "string" array with `array = np.array(strings)` (which is of object dtype, requiring special treatment of object arrays). With a masked array, the user would need to do e.g. `array = np.ma.array(data=["" if s is None else s for s in strings], mask=[s is None for s in strings])`, which might be more cumbersome to work with. A benefit, though, would be that this array is of unicode dtype, which would simplify the NumPy array handling.
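Spelled out as a runnable comparison of the two representations under discussion:

```python
import numpy as np

strings = ["hello", None]

# Current behavior: object-dtype array with None as the NA sentinel.
obj_array = np.array(strings)
print(obj_array.dtype)  # object

# Masked-array alternative: unicode dtype, NA positions masked.
masked = np.ma.array(
    data=["" if s is None else s for s in strings],
    mask=[s is None for s in strings],
)
print(masked.dtype)  # <U5
```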
Do you think it would be useful to start using masked arrays in all these cases? (I feel this could fit better in a different PR, though, due to the changes in parsing.)
I think we should probably use the new string dtype in the future, when available. It already supports missing data using sentinel values.
Good point, I agree. So let's leave string arrays as they are for now.

How about float arrays with missing values? Would it be useful to convert them to masked arrays, similarly to integer arrays? (I think that change could go in another PR though, as it changes parsing behavior.)
I think that it makes sense to do it for float arrays (in a different PR).
rdata/missing.py (Outdated)

```python
    raw_dtype = f"V{array.dtype.itemsize}"
    return array.view(raw_dtype) == np.array(na).view(raw_dtype)  # type: ignore [no-any-return]
```
I am not sure if I understand what is happening here.
I added a comment to the code (and changed the void type to an unsigned int). The reason is that R_FLOAT_NA is a NaN, so it follows NaN logic: `R_FLOAT_NA == R_FLOAT_NA` is False and `R_FLOAT_NA != R_FLOAT_NA` is True, but also `R_FLOAT_NA == np.nan` is False and `R_FLOAT_NA != np.nan` is True, although R_FLOAT_NA and np.nan are different values. So, we need a byte-by-byte comparison to distinguish R_FLOAT_NA from other NaN values.
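A self-contained sketch of that bitwise comparison (the NA bit pattern is built by hand here just for illustration; the actual constant lives in `rdata.missing`):

```python
import numpy as np

# R's NA for doubles is a specific NaN bit pattern: exponent all ones and
# 1954 (0x7A2) in the low word.
R_FLOAT_NA = np.array([0x7FF00000000007A2], dtype=np.uint64).view(np.float64)[0]

def is_float_na(array: np.ndarray) -> np.ndarray:
    # Compare raw bits: == on the float values would be False even for
    # NA itself, because NaN != NaN.
    return array.view(np.uint64) == np.array(R_FLOAT_NA).view(np.uint64)

values = np.array([1.0, np.nan, R_FLOAT_NA])
print(is_float_na(values))  # [False False  True]
```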
I see.
rdata/missing.py (Outdated)

```python
    try:
        return is_na(np.array(array, dtype=np.int32))
    except OverflowError:
        return is_na(np.array(array))
```
I also think this deserves some explanation.
I added a comment to the code. Basically, R does not seem to support integers larger than 32 bits, so this attempts to convert Python's int (which may be 64-bit or larger) to a 32-bit integer, or otherwise proceeds with the larger integer and fails later.
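The overflow behavior being relied on, in isolation:

```python
import numpy as np

# R integer vectors are 32-bit, so INT32 bounds are the limit.
np.array(2**31 - 1, dtype=np.int32)  # fine: 2147483647

try:
    np.array(2**31, dtype=np.int32)  # too large for int32
except OverflowError:
    print("does not fit in an R integer")
```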
Understood.
Co-authored-by: Carlos Ramos Carreño <[email protected]>
@vnmabus Thank you for the review. I have pushed the changes and left some comments/questions for discussion.
rdata/conversion/to_r.py (Outdated)

```python
        if self.df_attr_order is not None:
            attributes = {k: attributes[k] for k in self.df_attr_order}

        else:
```
Good idea! I restructured these conversions into user-definable constructor functions, similarly to the R-to-Python conversion. I also separated some other conversion logic into functions.

For `pd.Categorical` this was simple, but for `pd.DataFrame` it was a bit more complex, as it may contain `pd.Categorical` or other more complex objects. For that reason, the constructor functions also take a parameter `convert_to_r_object` that points back to `ConverterFromPythonToR.convert_to_r_object` (which in turn can keep track of references etc.).

Do you have a suggestion for a naming convention? Currently there is a `dataframe_constructor` for both the R-to-Python and Python-to-R conversions.
```diff
@@ -40,6 +42,7 @@ def write_rds(
         compression: Compression.
         encoding: Encoding to be used for strings within data.
         format_version: File format version.
+        constructor_dict: Dictionary mapping Python types to R classes.
```
That is not really true, right? It maps Python classes to functions to convert them to R classes (which is more powerful, as it can choose a different R class depending on the attributes of the object).
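Restated as a sketch (the constructor name and stub body here are hypothetical, just to make the corrected description concrete):

```python
from typing import Any

import pandas as pd

def dataframe_constructor(obj: pd.DataFrame, converter: Any):
    """Stub: build the R representation chosen for this dataframe."""
    ...

# The dict maps Python classes to constructor *functions*, not directly to
# R classes; each function may pick a different R representation depending
# on the object's attributes.
constructor_dict = {pd.DataFrame: dataframe_constructor}
```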
```diff
@@ -1,4 +1,5 @@
 """Utilities for converting R objects to Python ones."""
```
This docstring should change.
```python
ConstructorReturnValue = tuple[RObjectType, Any, dict[str, Any] | None]
ConstructorFunction1 = Callable[[Any], ConstructorReturnValue]
ConstructorFunction2 = Callable[[Any, Converter], ConstructorReturnValue]
```
For writing, as this is a new API, I think we can force the signature to always include the `Converter`. For reading we can't do that, as there are existing constructors without that parameter, and thus we would have to add the new signature and deprecate the old one.
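A sketch of a writing constructor with the always-include-the-Converter signature (the body and the chosen `RObjectType` are illustrative assumptions, not the PR's actual code):

```python
import numpy as np
import pandas as pd
from rdata.parser import RObjectType

def rangeindex_constructor(obj: pd.RangeIndex, converter):
    """Sketch: the signature always takes the converter, even when,
    as here, it is not needed."""
    if obj.step == 1:
        ...  # compact_intseq altrep path (omitted)
    # Fallback: a plain R integer vector, following the
    # (type, value, attributes) shape of ConstructorReturnValue.
    return (RObjectType.INT, np.asarray(obj, dtype=np.int32), None)
```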
```python
ConstructorReturnValue = tuple[RObjectType, Any, dict[str, Any] | None]
```
I am not sure if it wouldn't be better (less ambiguous and more flexible) to return the constructed R objects, even if that breaks symmetry with the reading constructors.
rdata/conversion/to_r.py (Outdated)

```python
        if self.df_attr_order is not None:
            attributes = {k: attributes[k] for k in self.df_attr_order}

        else:
```
I would add that as mandatory (at least for writing, as there is not existing code that requires backwards compatibility).
References to issues or other PRs
Closes #20.
Describe the proposed changes
This PR will add support for writing pandas dataframes. This turned out to be a quite large change to the current Python-to-R conversion functionality. Overview of changes:
- `rdata.conversion.to_r` reorganized; a class `ConverterFromPythonToR` added to simplify keeping track of references in RData files.
- Functions for handling NA values added in `rdata.missing`.
- `convert_altrep_to_range()` added (in `rdata.conversion._conversion`) to enable converting a compact_intseq altrep to a range object (e.g. as a dataframe index).

Additional information
The functions in `rdata.missing` could be useful for users to handle NA values in the desired way: e.g., `pd.arrays.FloatingArray(array)` would set all NaNs (including R's NA value) as "missing", but `pd.arrays.FloatingArray(array, is_na(array))` can be used to set only R's NA values as "missing".

Checklist before requesting a review