Tracking issue: Add Array API standard support #21
Comments
I added some methods you can call for most of these! Some of them need to be external library calls.
@willow-ahrens thank you - that's exactly what I needed! 😄
Don't forget about
Thanks! Added a bullet for it.
Hello! Re: #88, I think the root of this is having a way to do boolean indexing for us. I don't see anything here about boolean indexing along axes (https://data-apis.org/array-api/latest/API_specification/indexing.html#boolean-array-indexing). From what I can tell, it's not supported in the Julia library (or it is, but only as a custom loop: https://github.com/finch-tensor/Finch.jl/blob/2e89d2228a777a8238eccd145cfa6a7b405eb76c/docs/src/docs/language/mask_sugar.md?plain=1#L5); however, it is part of the array-api. Is there a plan to support it? I would really love to use this, but lacking a proper indexing method is a big blocker (assuming I have this right).
@ilan-gold Hi Ilan! Thanks for your interest in the project! Our goal is to support all of the operations in the array API, so if something has been left out, it likely wasn't intentional. It is quite feasible to implement this operation, and I've added it to the list! The only caveat is that I don't think we will be able to implement it in a way that fuses with other operations, at least for now. I think I left it out of the Julia implementation just because I needed to get things up and running quickly, but boolean indexing is actually part of the Julia indexing API as well, so this addition is in scope for both languages!
Could you highlight what this would mean practically for users? This is mostly a performance issue, not a practical one (in the sense of "all downstream operations will break if you boolean index"), no?
Yes, this is a performance thing, not a semantics thing. What it would mean for users is just that if we wish to do a logical indexing operation, we would need to materialize the inputs and outputs of the operation. (This is normal for most libraries; Finch just provides a special fusion optimization for certain operations.)
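For concreteness, here is the kind of boolean indexing being discussed, sketched with NumPy standing in for an Array-API-compatible namespace (illustrative only; finch-tensor support is what is being requested, and the variable names are made up):

```python
import numpy as np

# NumPy stands in here for any Array-API-compatible namespace; per the
# discussion above, finch-tensor would have to materialize the operands of
# this operation rather than fusing it with others.
a = np.asarray([[1.0, 0.0, 2.0],
                [0.0, 3.0, 0.0]])

mask = a > 0.0         # boolean mask with the same shape as `a`
print(a[mask])         # [1. 2. 3.]  -- selected elements, flattened

row_mask = np.asarray([True, False])
print(a[row_mask, :])  # [[1. 0. 2.]] -- masking along a single axis
```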
Right, ok, interesting. Thanks @willow-ahrens, I'll keep my ear to the ground in the meantime. Your library is a big target for us using the array-api; really looking forward to digging into it a bit more!
Hi @willow-ahrens @hameerabbasi,

This issue is meant to track progress on implementing the Array API standard for `finch-tensor`.

I thought that we could try adding short notes to the bullet points, saying which `Finch.jl` functions should be called to implement a given entry. I think we already had some ideas during one of our first calls.

Array API: https://data-apis.org/array-api/latest/index.html

Backlog

main namespace
- `astype` - API: `finch.astype` function #15 - eager
- element-wise functions (`add`, `multiply`, `cos`, ...) - API: Lazy API #17 (partially...)
- reductions (`xp.prod`, `xp.sum`) - `jl.sum` and `jl.prod`, also just `jl.reduce` - API: Lazy API #17
- `matmul` - implemented with `finch.tensordot` for non-stacked input. Should be rewritten with `jl.mul` / Finch einsum.
- `tensordot` - `finch.tensordot` - API: Implement `tensordot` and `matmul` #22
- `where` - `jl.broadcast(jl.ifelse, cond, a, b)` (see the sketch after this list) - API: Implement `where` and `nonzero` #30
- `argmin` / `argmax` - `jl.argmin` (bug willow if this isn't implemented already) - eager for now #90
- `take` - `jl.getindex` - eager for now
- `nonzero` - this is an eager function, but it is implemented as `ffindnz(arr)` - API: Implement `where` and `nonzero` #30
- `asarray`, `ones`, `full`, `full_like`, ... - `finch.Tensor` constructor, as well as `jl.copyto!(arr, jl.broadcasted(Scalar(1)))`, as well as changing the default of the tensor with `Tensor(Dense(Element(1.0)))`. We may need to distinguish some of these. API: Add `asarray` function #28, API: Add `eye` function #32
- `max`, `mean`, `min`, `std`, `var`
- `unique_all`, `unique_counts`, `unique_inverse`, `unique_values` - eager
- `all`, `any`
- `concat` - eager for now
- `expand_dims` - lazy
- `flip` - eager for now
- `reshape` - eager for now
- `roll` - eager for now
- `squeeze` - lazy
- `stack` - eager for now
- `argsort` / `sort` - eager
- `broadcast_arrays` - eager for now
- `broadcast_to` - eager for now
- `can_cast` / `finfo` / `iinfo` / `result_type`
- `bitwise_and` / `bitwise_left_shift` / `bitwise_invert` / `bitwise_or` / `bitwise_right_shift` / `bitwise_xor`
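A minimal sketch of how a couple of the entries above could be lowered through the Julia bridge, assuming juliacall is what sits behind the `jl.*` notation in this list (the Python-side wrapper names are hypothetical):

```python
from juliacall import Main as jl

jl.seval("using Finch")  # make Finch.jl functions such as ffindnz available

def where(cond, a, b):
    # Per the bullet above: `where` maps to broadcasting Julia's `ifelse`
    # over the mask and the two operands (eager for now). `cond`, `a`, `b`
    # are assumed to already be Finch tensors on the Julia side.
    return jl.broadcast(jl.ifelse, cond, a, b)

def nonzero(arr):
    # Per the bullet above: `nonzero` is eager and backed by Finch.jl's
    # `ffindnz`, which returns the coordinates and values of stored entries.
    return jl.ffindnz(arr)
```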
`linalg` namespace

(I copied those from the benchmark suite. If something turns out to be unfeasible we can drop it.)

- `linalg.vecdot` - `finch.tensordot`
- `linalg.vector_norm` - `finch.norm`
- `linalg.trace` - eager
- `linalg.tensordot` - implemented in the main namespace. Just needs an alias
- `linalg.outer` #89
- `linalg.cross` - eager for now
- `linalg.matrix_transpose` - lazy
- `linalg.matrix_power` - eager (call matmul on the sparse matrix until it gets too dense; see the sketch after this list)
- `linalg.matrix_norm` - for `nuc` or `2`, call external library. For `fro`, `inf`, `1`, `0`, `-1`, `-inf`, call `jl.norm`.
- `xp.linalg.diagonal` - `finch.tensordot(finch.diagmask(), mtx)`
- `xp.linalg.cholesky` - call CHOLMOD or something
- `xp.linalg.det` - call EIGEN or something
- `xp.linalg.eigh` - call external library
- `xp.linalg.eigvalsh` - call external library
- `xp.linalg.inv` - call external library - `scipy.sparse.linalg.inv`
- `xp.linalg.matrix_rank` - call external library
- `xp.linalg.pinv` - call external library
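A rough sketch of the eager `linalg.matrix_power` strategy noted above (repeated matmul on the sparse matrix); `xp` stands for any Array-API-compatible namespace and the helper name is hypothetical:

```python
def matrix_power(x, n, xp):
    # Eager strategy: keep multiplying by the (sparse) matrix. Each product is
    # materialized; a real implementation could switch to a dense routine once
    # fill-in makes the result too dense. Negative powers (which require a
    # matrix inverse) are out of scope for this sketch.
    if n == 0:
        return xp.eye(x.shape[0], dtype=x.dtype)
    result = x
    for _ in range(n - 1):
        result = xp.matmul(result, x)
    return result
```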
`Tensor` methods and attributes

- `Tensor.to_device()` - `finch.moveto`
miscellaneous