
fixes from compiling with warnings #17

Open. Wants to merge 9 commits into base: graal-team/hpy.

Conversation

@mattip commented Nov 15, 2023

These were almost all of the fixes I needed to get a warning-free compilation on PyPy. I used this command line to build a wheel:

CFLAGS="-O0 -g3" python setup.py build --warn-error build_src --verbose-cfg bdist_wheel

PyPy still cannot cleanly load numpy; after running the command above I get the following, which suggests something is off with the metaclass handling somewhere:

$ pip install dist/numpy-1.23.0.dev0+1466.g23f4a1898d-pp39-pypy39_pp73-linux_x86_64.whl
$ cd dist
$ python -c "import numpy"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/venv3pypy/lib/pypy3.9/site-packages/numpy/__init__.py", line 143, in <module>
    from . import lib
  File "/tmp/venv3pypy/lib/pypy3.9/site-packages/numpy/lib/__init__.py", line 21, in <module>
    from . import scimath as emath
  File "/tmp/venv3pypy/lib/pypy3.9/site-packages/numpy/lib/scimath.py", line 46, in <module>
    _ln2 = nx.log(2.0)
TypeError: object.__new__(float64) is not safe, use float64.__new__()

@@ -257,7 +257,7 @@ PyArrayIdentityHash_SetItem(PyArrayIdentityHash *tb, PyObject *cache_owner,


 NPY_NO_EXPORT PyObject *
-PyArrayIdentityHash_GetItem(PyObject *cache_owner, PyArrayIdentityHash const *tb, PyObject *const *key)
+PyArrayIdentityHash_GetItem(PyArrayIdentityHash const *tb, PyObject *cache_owner, PyObject *const *key)
mattip (Author):
This seems wrong? The argument order should be the same as PyArrayIdentityHash_SetItem

Reviewer:
Okay, makes sense. Seems that I was just not consistent in adding the cache_owner argument.
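
For reference, a small sketch of the two declarations with the argument order aligned, reconstructed from the hunks above; the return type of SetItem and its trailing parameters are not shown in the hunk and are assumptions here:

/* Sketch only: both functions take the hash table first, then the cache owner. */
NPY_NO_EXPORT int
PyArrayIdentityHash_SetItem(PyArrayIdentityHash *tb, PyObject *cache_owner,
                            PyObject *const *key, PyObject *value);   /* value parameter assumed */

NPY_NO_EXPORT PyObject *
PyArrayIdentityHash_GetItem(PyArrayIdentityHash const *tb, PyObject *cache_owner,
                            PyObject *const *key);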

@@ -630,7 +630,7 @@ static NPY_INLINE npy_longdouble
 hpy_string_to_long_double(HPyContext *ctx, HPy op)
 {
     const char *s;
-    const char *end;
+    char *end;
mattip (Author):
This caused many warnings: either we take this change, or all of the uses of end must become const char *.
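
For context, a minimal sketch of why the end pointer wants to be plain char *: the C library string-to-float parsers report the stop position through a char ** out-parameter, so a const char *end local produces a discarded-qualifier warning at every call. This is illustrative only and uses standard strtold; the NumPy helper presumably follows the same endptr convention.

#include <stdlib.h>

static long double
parse_long_double(const char *s)
{
    char *end;                              /* must be char *, not const char *: strtold takes char ** */
    long double value = strtold(s, &end);
    if (end == s) {
        /* nothing was parsed; how to report the error is up to the caller */
        return 0.0L;
    }
    return value;
}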

@@ -1533,7 +1533,7 @@ datetime_as_string_impl(HPyContext *ctx, HPy NPY_UNUSED(ignored), const HPy *arg
     op_flags[1] = NPY_ITER_WRITEONLY|
                   NPY_ITER_ALLOCATE;

-    iter = NpyIter_MultiNew(2, op, flags, NPY_KEEPORDER, NPY_UNSAFE_CASTING,
+    iter = HNpyIter_MultiNew(ctx, 2, op, flags, NPY_KEEPORDER, NPY_UNSAFE_CASTING,
mattip (Author):
typo?

Reviewer:
Since op seems to contain handles, calling the new function seems to make sense.

Reviewer:
Yes, we probably just overlooked that.

-    if (hpy_get_view_from_index(ctx, self, self_struct, &view, indices, 2, 0) < 0) {
+    hpy_indices[1].value = PyArray_NDIM(self_struct) - 1;
+    hpy_indices[1].type = HAS_ELLIPSIS;
+    if (hpy_get_view_from_index(ctx, self, self_struct, &view, hpy_indices, 2, 0) < 0) {
mattip (Author):
I needed to duplicate the indices struct, since there is a call to legacy get_item_pointer for struct dtypes on line 2672 above

Reviewer:
Should indices be updated here too since both are being maintained?

mattip (Author):
No. Only hpy_indices is being maintained. indices is used only in get_item_pointer in a read-only fashion.

-    use_min_scalar = should_use_min_scalar(nin, op, 0, NULL);
+    hpy_abort_not_implemented("convert op to PyArrayObject**");
+    PyArrayObject ** dummy;
+    use_min_scalar = should_use_min_scalar(nin, dummy, 0, NULL);
mattip (Author):
should_use_min_scalar is a legacy interface; op is an HPy *.

Reviewer:
Is there something preventing us from calling HPy_AsPyObject(op) here?

mattip (Author):
That might work, but I would prefer not to add more HPy_AsPyObject calls.
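
For the record, a hypothetical sketch of the HPy_AsPyObject route discussed above; the helper, the temporary array, its cleanup, and the error handling are assumptions, while nin, op and should_use_min_scalar follow the hunk. Each HPy_AsPyObject call returns a new reference that has to be released again:

/* Hypothetical helper, not part of the PR. */
static int
min_scalar_via_capi(HPyContext *ctx, int nin, const HPy *op)
{
    int use_min_scalar;
    PyArrayObject **tmp_ops = PyMem_Malloc(nin * sizeof(PyArrayObject *));
    if (tmp_ops == NULL) {
        return -1;
    }
    for (int i = 0; i < nin; i++) {
        tmp_ops[i] = (PyArrayObject *)HPy_AsPyObject(ctx, op[i]);   /* new reference */
    }
    use_min_scalar = should_use_min_scalar(nin, tmp_ops, 0, NULL);
    for (int i = 0; i < nin; i++) {
        Py_XDECREF(tmp_ops[i]);
    }
    PyMem_Free(tmp_ops);
    return use_min_scalar;
}

That is roughly the amount of boilerplate each extra HPy_AsPyObject site brings with it.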

@@ -240,8 +244,8 @@ PyUFunc_AddWrappingLoop(PyObject *ufunc_obj,
         if (cmp == 0) {
             continue;
         }
-        wrapped_meth = (PyArrayMethodObject *)PyTuple_GET_ITEM(item, 1);
+        wrapped_meth = (PyArrayMethodObject *)PyTuple_GetItem(item, 1);
         if (!PyObject_TypeCheck(wrapped_meth, &PyArrayMethod_Type)) {
mattip (Author):
Not sure why the GET_ITEM macro causes problems here; something about the type of item?

mattip (Author):
Me neither, but using the macro failed on PyPy.
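
For context, a rough sketch of the difference between the two; the macro definition is paraphrased from CPython's headers, and on an alternative implementation the direct struct access, rather than the cast, may be what misbehaves:

/* CPython's macro, approximately: reaches into the tuple struct directly,
 * with no type or bounds checking, returning a borrowed reference. */
#define PyTuple_GET_ITEM(op, i) (((PyTupleObject *)(op))->ob_item[(i)])

/* The API function: goes through the C-API boundary, checks the type and
 * the index, and also returns a borrowed reference. */
PyObject *PyTuple_GetItem(PyObject *op, Py_ssize_t i);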

@mattip commented Nov 27, 2023

Would anyone like to review this?

@mattip commented Dec 7, 2023

Added a commit to close #18

@mattip commented Dec 20, 2023

Anyone want to review this?

    HPyField *dtypes;   /* PyArray_DTypeMeta **dtypes */
    HPyField method;    /* PyArrayMethodObject *method */
    int nargs;          /* method->nin + method->nout */
} PyBoundArrayMethodObject;

-HPyType_HELPERS(PyBoundArrayMethodObject)
+HPyType_LEGACY_HELPERS(PyBoundArrayMethodObject)

Reviewer:
I guess this had to become a legacy type so that someone could call HPy_AsPyObject on it?

Reviewer:
🤔 good point. I don't think we ever specified that.
It is not required on CPython because any object is a PyObject anyway. So, it is always possible to convert.
It is not required for GraalPy because we would return a wrapper that emulates the PyObject.
Is it required for PyPy? If so, we should discuss that and maybe properly specify (and enforce spec in debug mode).

mattip (Author):
PyPy works around it, but it would be more efficient if the HELPERS macro were correct.
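
For reference, a simplified sketch of what the two shapes mean; the field names follow the hunk above but the example type names are hypothetical. In HPy a legacy-shaped struct starts with PyObject_HEAD, which is what makes a cheap HPy_AsPyObject round trip possible:

/* Pure HPy shape: no PyObject_HEAD in the C struct. */
typedef struct {
    HPyField *dtypes;
    HPyField method;
    int nargs;
} PureBoundMethodExample;
HPyType_HELPERS(PureBoundMethodExample)

/* Legacy shape: the struct must begin with PyObject_HEAD, so a real
 * PyObject * for the instance always exists. */
typedef struct {
    PyObject_HEAD
    HPyField *dtypes;
    HPyField method;
    int nargs;
} LegacyBoundMethodExample;
HPyType_LEGACY_HELPERS(LegacyBoundMethodExample)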

@@ -4986,7 +4986,7 @@ HPyArray_ArangeObj(HPyContext *ctx, HPy start, HPy stop, HPy step, HPy /* PyArra
     PyObject *new;
     PyObject *py_range = HPy_AsPyObject(ctx, range);
     CAPI_WARN("calling PyArray_Byteswap");
-    new = PyArray_Byteswap(py_range, 1);
+    new = PyArray_Byteswap((PyArrayObject *)py_range, 1);

Reviewer:
Should we perhaps do PyArrayObject *py_range = (PyArrayObject *) HPy_AsPyObject(ctx, range) where py_range is defined?

mattip (Author):
I think this way has fewer casts, since most of the places where this is used prefer a PyObject *.
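
For comparison, the alternative suggested above would look roughly like this (surrounding error handling omitted):

    PyObject *new;
    PyArrayObject *py_range = (PyArrayObject *)HPy_AsPyObject(ctx, range);
    CAPI_WARN("calling PyArray_Byteswap");
    new = PyArray_Byteswap(py_range, 1);

Either form compiles cleanly; the PR keeps py_range as PyObject * because, per the comment above, most later uses in this function prefer that type.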

@@ -53,8 +53,8 @@ typedef struct {
      * while normal object to string casts only accept ASCII, so it ensures
      * that that the object array already contains bytes and not strings.
      */
-    bool python_byte_converters;
-    bool c_byte_converters;
+    npy_bool python_byte_converters;
+    npy_bool c_byte_converters;

Reviewer:
What is this change for?

mattip (Author):
npy_bool is the correct type used in the structs. This reduces the number of warnings
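
A minimal sketch of the kind of warning this removes, assuming NumPy's usual typedef unsigned char npy_bool;. C99 bool and npy_bool hold the same values but are distinct types, so taking the address of a bool field where the rest of the code expects npy_bool * trips an incompatible-pointer-type warning:

#include <stdbool.h>

typedef unsigned char npy_bool;          /* assumption: matches NumPy's npy_common.h */

struct config {
    bool old_field;                      /* field declared as C99 bool */
    npy_bool new_field;                  /* field declared as npy_bool */
};

static void set_flag(npy_bool *flag) { *flag = 1; }

static void demo(struct config *cfg)
{
    set_flag(&cfg->new_field);           /* fine */
    set_flag(&cfg->old_field);           /* warns: bool * where npy_bool * is expected */
}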

@hodgestar left a comment
On the whole these changes look fairly unobjectionable to me. I asked some questions where I didn't understand things.

It would be awesome if someone from the GraalPy team could try this branch and confirm that it works for them, but if they're short on time at the moment, I think it would also be okay to merge it.

This isn't a production branch for anyone -- it's a place to push things forwards and get stuff working.

@fangerer left a comment
Overall, looks good to me. Thank you!

@@ -930,7 +930,7 @@ typedef struct PyArrayFlagsObject {
int flags;
} PyArrayFlagsObject;

HPyType_HELPERS(PyArrayFlagsObject);

Reviewer:
Do you know off the top of your head if PyArrayFlagsObject still needs to be legacy? It looks like it could be a pure HPy type but we just forgot to remove PyObject_HEAD.

mattip (Author):
No, I don't know. We could check it in a follow-up PR

@@ -108,18 +108,21 @@ NPY_NO_EXPORT HPyType_Spec HPyArray_PyIntAbstractDType_spec = {
     .name = "numpy._IntegerAbstractDType",
     .basicsize = sizeof(PyArray_Descr),
     .flags = Py_TPFLAGS_DEFAULT,
+    .builtin_shape = HPyType_BuiltinShape_Legacy

Reviewer:
If there is a HPyType(_LEGACY)_HELPERS(PyArray_Descr), we should use SHAPE(PyArray_Descr) here.

mattip (Author):
Right, but as far as I can tell there is no such use of the HELPERS.
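
For reference, a sketch of what the suggestion would look like if the helpers were added. Whether PyArray_Descr gets the pure or the legacy helpers is an open question here; the spec fields are copied from the hunk above, and SHAPE() is the constant the helpers macro defines:

HPyType_LEGACY_HELPERS(PyArray_Descr)    /* or HPyType_HELPERS(PyArray_Descr) */

NPY_NO_EXPORT HPyType_Spec HPyArray_PyIntAbstractDType_spec = {
    .name = "numpy._IntegerAbstractDType",
    .basicsize = sizeof(PyArray_Descr),
    .flags = Py_TPFLAGS_DEFAULT,
    .builtin_shape = SHAPE(PyArray_Descr),
};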


Co-authored-by: Simon Cross <[email protected]>