
Can it be used with accelerate? #8

Open · kamilc opened this issue May 7, 2018 · 8 comments

kamilc commented May 7, 2018

This looks fantastic!

One (possibly) trivial question I have is: can this be used with accelerate? My Haskell has gotten a little rusty - I can't tell for sure just from looking at the types...

Thanks!

mstksg (Owner) commented May 7, 2018

Thanks!

As of the recent version 0.2, it should be usable with accelerate, provided the types all have a Backprop instance. I'm currently working on an adapter that gives all the right instances to accelerate types, but there's nothing stopping anyone from experimenting with the instances on their own!

The bigger piece of work is probably porting all of the linear algebra operations with their gradients, which is a bit of a boilerplatey undertaking (see hmatrix-backprop). Accelerate types might work, but you would also need to lift the linear algebra functions (dot products, matrix multiplications, etc.) for the user by providing each one's gradient as well. This is something I'm also working on getting out soon!
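
For illustration, here is a minimal sketch of that lifting pattern, using backprop's op2/liftOp2 on a plain hmatrix dot product. The Backprop instance for Vector Double is left as an assumed constraint rather than taken for granted:

{-# LANGUAGE FlexibleContexts #-}

import Numeric.Backprop
import qualified Numeric.LinearAlgebra as HM

-- Sketch only: op2 pairs the forward result with a function that maps
-- the downstream gradient to a gradient for each input, and liftOp2
-- turns the resulting Op into a function on BVars.
dot :: (Reifies s W, Backprop (HM.Vector Double))
    => BVar s (HM.Vector Double)
    -> BVar s (HM.Vector Double)
    -> BVar s Double
dot = liftOp2 . op2 $ \x y ->
    ( x HM.<.> y                          -- forward: x . y
    , \g -> (HM.scale g y, HM.scale g x)  -- d/dx = g*y, d/dy = g*x
    )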

kamilc (Author) commented May 8, 2018

@mstksg Awesome! Thanks for the pointers to the hmatrix integration - I'll have a look. Unfortunately I can't promise much help as of now, but this might change in the near future.

cpennington commented May 16, 2018

Have you seen https://github.com/tmcdonell/accelerate-blas?

It could be a good place to start augmenting with backprop variables.

@vincent-hui

Hi all,

Has anyone tried to use backprop with accelerate?

masterdezign commented May 6, 2019

I am trying to figure it out now. My problem is:

if a neural network with fully connected layers is represented as

network :: Reifies s W
        => BVar s (Acc (Matrix Float))  -- ^ Inputs
        -> BVar s [Acc (Matrix Float)]  -- ^ Weights
        -> BVar s (Acc (Matrix Float))  -- ^ Outputs

then how do I represent [Acc (Matrix Float)] as Arrays a => Acc a?

An additional problem is the discrepancy between pure equations (the backprop library) and impure, GPU-specific operations (the Accelerate DSL), which might ultimately result in speed poor enough to annihilate the advantages of using a GPU.
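
One possible way around the list question, sketched under the assumption of a Backprop instance for Acc (Matrix Float): backprop's sequenceVar turns a BVar of a traversable container into a container of BVars, so the weights may not need to be packed into a single Acc at all. The per-layer function is taken as a parameter here (for instance, the linear from the following comment):

{-# LANGUAGE FlexibleContexts #-}

import Data.Array.Accelerate (Acc, Matrix)
import Data.List (foldl')
import Numeric.Backprop

-- Sketch only: thread the activations through each layer's weights,
-- using sequenceVar to get a list of per-layer weight BVars.
network :: (Reifies s W, Backprop (Acc (Matrix Float)))
        => (BVar s (Acc (Matrix Float)) -> BVar s (Acc (Matrix Float)) -> BVar s (Acc (Matrix Float)))
           -- ^ layer application: weights -> activations -> activations
        -> BVar s (Acc (Matrix Float))  -- ^ Inputs
        -> BVar s [Acc (Matrix Float)]  -- ^ Weights
        -> BVar s (Acc (Matrix Float))  -- ^ Outputs
network layer x ws = foldl' (flip layer) x (sequenceVar ws)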

@masterdezign

I am using the matrix-matrix multiplication (<>) from the accelerate-blas package as the basis of a linear layer:

linear :: (Reifies s W, Numeric e)
       => BVar s (Acc (Matrix e))
       -> BVar s (Acc (Matrix e))
       -> BVar s (Acc (Matrix e))
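
A hypothetical lifting of that (<>) with the standard matrix-multiplication gradients might look like the following sketch, again assuming a Backprop instance for Acc (Matrix e):

{-# LANGUAGE FlexibleContexts #-}

import Data.Array.Accelerate (Acc, Matrix, transpose)
import Data.Array.Accelerate.Numeric.LinearAlgebra (Numeric, (<>))
import Numeric.Backprop
import Prelude hiding ((<>))

-- Sketch only; for C = A <> B with upstream gradient G, the standard
-- gradients are dA = G <> transpose B and dB = transpose A <> G.
linear :: (Reifies s W, Numeric e, Backprop (Acc (Matrix e)))
       => BVar s (Acc (Matrix e))
       -> BVar s (Acc (Matrix e))
       -> BVar s (Acc (Matrix e))
linear = liftOp2 . op2 $ \a b ->
    ( a <> b
    , \g -> (g <> transpose b, transpose a <> g)
    )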

mstksg (Owner) commented Aug 27, 2019

One approach might be to give Acc a Backprop instance, like

instance Backprop (Acc a) where
    zero :: Acc a -> Acc a
    one  :: Acc a -> Acc a
    add  :: Acc a -> Acc a -> Acc a

As long as you can assemble the DSL purely, it should work. One issue that might come up, however, is the partsVar mechanic, which relies on lens-based accessors.
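
A concrete, hypothetical version of that sketch for one fixed array type might look like this; all three methods only build more Acc AST, so everything stays pure:

{-# LANGUAGE FlexibleInstances #-}

import Data.Array.Accelerate (Acc, Matrix)
import qualified Data.Array.Accelerate as A
import Numeric.Backprop.Class (Backprop (..))

-- Hypothetical (orphan) instance for one concrete array type, filling
-- in the zero/add/one methods of backprop 0.2's Backprop class.
instance Backprop (Acc (Matrix Float)) where
    zero = A.map (const 0)
    one  = A.map (const 1)
    add  = A.zipWith (+)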

@masterdezign

I would really love to try, but I am really, really afraid of getting a result similar to this one. Are there any limitations that you are aware of?
